forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_decision | forum_pdf_url | forum_url | venue | year | reviews
---|---|---|---|---|---|---|---|---|---|---|
SVP44gujOBL | A Simple Approach To Define Curricula For Training Neural Networks | [
"Vinu Sankar Sadasivan",
"Anirban Dasgupta"
] | In practice, a sequence of mini-batches generated by uniform sampling of examples from the entire dataset is used for training neural networks. Curriculum learning is a training strategy that sorts the training examples by their difficulty and gradually exposes them to the learner. In this work, we propose two novel curriculum learning algorithms and empirically show their improvements in performance with convolutional and fully-connected neural networks on multiple real image datasets. Our dynamic curriculum learning algorithm tries to reduce the distance between the network weight and an optimal weight at any training step by greedily sampling examples with gradients that are directed towards the optimal weight. The curriculum ordering determined by our dynamic algorithm achieves a training speedup of $\sim 45\%$ in our experiments. We also introduce a new task-specific curriculum learning strategy that uses statistical measures such as standard deviation and entropy values to score the difficulty of data points in natural image datasets. We show that this new approach yields a mean training speedup of $\sim 43\%$ in the experiments we perform. Further, we also use our algorithms to study why curriculum learning works. Based on our study, we argue that curriculum learning removes noisy examples from the initial phases of training and gradually exposes them to the learner, acting like a regularizer that helps in improving the generalization ability of the learner. | [
"Curriculum learning",
"neural networks"
] | Reject | https://openreview.net/pdf?id=SVP44gujOBL | https://openreview.net/forum?id=SVP44gujOBL | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"d-KIcBBMx23",
"qaD0C-_rzaP",
"BJWLD5cI4h",
"_GUKaIIgbXW",
"XgiwCWbo7ey",
"IlxEQLhReJm",
"em87MxTYhBs",
"MOFiq9KUA1h",
"SC2yW2Mfgwt",
"cRbsXenFBcL"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040437515,
1606136672901,
1606136057660,
1606135788027,
1606135523974,
1606135104042,
1603946689062,
1603778368543,
1603710456812,
1603662950317
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3564/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3564/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3564/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3564/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3564/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3564/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3564/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3564/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3564/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposed two algorithms for curriculum learning, one based on the knowledge of a good solution (e.g. a local minimum or a solution found by SGD) and another one proposed for natural image datasets based on entropy and standard deviation over pixels.\\n\\nReviewers seem to like the ideas behind the proposed algorithms and their simplicity. However, there are several major concerns that are shared among reviewers:\\n1- One of the algorithms needs knowledge of a good solution (e.g. a local minimum or a solution found by SGD), which makes it impractical, and the other one doesn't use any information about the mapping between the input and the label.\\n2- Discussion of previous work on curriculum learning, explanation of how the proposed algorithms differ from previous work, and empirical comparison to other curriculum learning methods are lacking or need significant improvement.\\n3- The experiment section needs improvement both in terms of experimental methodology and having more tasks/datasets.\\n\\nReviewers have done a great job at pointing to specific areas that need improvement. I hope the authors will use the reviewers' comments to improve their work.\\n\\nGiven the above major concerns, I recommend rejecting this paper.\"}",
"{\"title\": \"Reply to R1\", \"comment\": \"Thank you for your comments and suggestions for improvement. We have added all the related work that has been pointed out. We have corrected the statement about SGD to say that it proceeds by making unbiased estimates of the gradient on the full data.\", \"q1\": \"Unclarity in equation 1.\", \"a1\": \"We have corrected the inconsistencies between Figure 1 (in the revised draft) and the definitions in the text for equation 1. In Figure 1, $\\\\theta$ is the angle between ($\\\\bar{w} - w_3$) and $a_0$; this is a local definition and not related to the $\\\\theta_{i}^{\\\\tilde{t}}$ defined outside. $\\\\theta_{i}^{\\\\tilde{t}}$ is the angle between $\\\\nabla f_i (w_t)$ and $a_{\\\\tilde{t}}$.\", \"q2\": \"How many steps do we train for to obtain $\\\\tilde{w}$?\", \"a2\": \"We train the vanilla model once until convergence (in terms of training accuracy) to obtain $\\\\tilde{w}$, given $w_0$ (we make this clear in Section 3 of the revised draft). The actual number of steps varies according to different datasets and architectures. $\\\\bar{w}$ is a global optimum and it is not clear we can reach it efficiently. We have clarified the writeup to reflect this.\", \"q3\": \"Learning rates for the optimizer\", \"a3\": \"We use an exponential step-decay as the learning rate scheduler. We tune both the exponential decay factor and the steps after which the learning rate is decayed.\", \"q4\": \"Regarding Figure 3 (in the revised draft), \\u201cnoisy\\u201d data, and tuning k\", \"a4\": \"Figure 3 shows the performance of DCL+ with the same setup as experiment 2 (as mentioned in the caption) using different k values. In the DCL framework, an example is \\u201ceasy\\u201d if its $\\\\rho$ value is low. That is, an \\u201ceasy\\u201d example has its gradient aligned towards the minimum more than a \\u201chard\\u201d example\\u2019s gradient. 
DCL considers examples with high $\\\\rho$ values as \\u201cnoisy\\u201d since they misguide gradient descent away from the minimum.\\nThe hyperparameter k in DCL is tuned by trial and error on the test set. The experiments we perform by varying the value of k help in understanding how CL serves as a regularizer.\", \"q5\": \"Dataset for DCL\", \"a5\": \"As rightly pointed out by the reviewers, the DCL algorithm is computationally very expensive. So, experimenting with larger datasets such as full MNIST and CIFAR would be laborious. Moreover, our intention is to use DCL as a framework to support our following arguments and not as a practical CL algorithm:\\na) Ordering of mini-batches within an epoch matters (comparing DCL+ and DCL-). \\nb) CL serves as a regularizer that helps in improving the generalization of the model by avoiding \\u201cvery hard\\u201d examples for training. DCL+ shows that a curriculum can be defined with gradient information. \\nHence, we only analyze the working of DCL on a hard dataset (small-mammals) and an easy dataset (MNIST with labels 0 and 1).\"}",
"{\"title\": \"Reply to R3\", \"comment\": \"Thank you for your comments and suggestions for improvement.\", \"q1\": \"How are speedups measured and how to verify the 40% number?\", \"a1\": \"The speedup is measured as the improvement over the vanilla model (we have made this clear in the last paragraph of Section 2 of the revised draft). It is computed using the training step value at which CL achieves the same final test accuracy as the vanilla model. For example, if the vanilla model converges to 90% test accuracy in 100 steps, and the CL model achieves 90% test accuracy at training step 50, then the speedup for CL is 2x. This helps in understanding how well CL performs compared to the vanilla model when a target test accuracy is to be achieved. We calculate the speedups attained by CL in each of our experiments and report them along with the mean speedup.\", \"q2\": \"Speedup gains for DCL+\", \"a2\": \"As the reviewer rightly points out, DCL is computationally very expensive as it requires computing gradients for the entire dataset to find an ordering. Our intention is to use DCL as a framework to support our following arguments and not as a practical CL algorithm:\\na) Ordering of mini-batches within an epoch matters (comparing DCL+ and DCL-). \\nb) CL serves as a regularizer that helps in improving the generalization of the model by avoiding \\u201cvery hard\\u201d examples for training. DCL+ shows that a curriculum can be defined with gradient information. \\nHowever, stddev+/- and entropy+/- do not require the vanilla model to be trained. Hence, our proposed CL algorithms based on statistical measures are beneficial in practice.\", \"q3\": \"How to sample using $\\\\rho_{t,i}$?\", \"a3\": \"As the reviewer rightly points out, $\\\\rho_{t,i}$ is not a distribution but a scoring function. 
It is only for the purpose of sorting the data points, as mentioned in Algorithm 1.\", \"q4\": \"Interference of Algorithm 1 with bucketing in seq2seq models\", \"a4\": \"Our algorithms are aimed at defining CL for image classification tasks. However, the sorting of data points could be performed within the buckets for seq2seq models.\"}",
"{\"title\": \"Reply to R4\", \"comment\": \"Thank you for your comments and suggestions for improvement.\\n\\nIn Figure 7, the term \\u201ccorrelation\\u201d is misleading; we intend to say that the plots show the relation between the rank according to the stddev and the $\\\\rho_{i,t}$ values. We agree that the relationship between the two quantities in the bottom row (small mammals dataset) seems much weaker than in the top one (we have added more text regarding Figure 7 on Page 9, first paragraph in the revised draft).\", \"a1\": \"We have compared two of our experiments to Hacohen & Weinshall (2019) as a baseline in Figure 6 a,b.\", \"a2\": \"Bengio et al. (2009) introduced the notion of \\u201cnoisy example\\u201d for CL in their work. In our work, we intend to define \\u201cnoisy examples\\u201d differently based on their gradient values, as the reviewer rightly understands. It is not clear to us whether these notions can be unified.\", \"a3\": \"DCL- shows a much worse test loss in the initial phases of training. Hence, we truncate that part in order to clearly show the improvement of DCL+ over vanilla towards convergence.\"}",
"{\"title\": \"Reply to R2\", \"comment\": \"Thank you for your comments and suggestions for improvement.\", \"a1\": \"We will clarify this point. The DCL algorithm does not really need an optimal set of weights. As mentioned briefly (in the second-to-last paragraph of page 4 in the revised draft), we can run the DCL algorithm in the following manner -- for a given initialization of weight ($w_0$), the weight that vanilla SGD converges to ($\\\\tilde{w}$) is taken as an approximation for the global minimum ($\\\\bar{w}$). Our empirical analysis shows that DCL finds an ordering of the data points that leads to faster convergence of the model from $w_0$ to $\\\\tilde{w}$. Empirical results show that, in fact, DCL reaches a better solution than $\\\\tilde{w}$.\", \"a2\": \"As the reviewer rightly points out, DCL is computationally very expensive as it requires computing gradients for the entire dataset to find an ordering. Our intention is to use DCL as a framework to support our following arguments and not as a practical CL algorithm:\\na) Ordering of mini-batches within an epoch matters (shown using the comparative performance of DCL+ and DCL- in Figure 2 of the revised draft), and \\nb) CL serves as a regularizer that helps in improving the generalization of the model by avoiding \\u201cvery hard\\u201d examples for training (shown by varying k). \\nDCL+ shows that a curriculum can be defined with gradient information. The practical scoring functions that we suggest -- stddev+/-, entropy+/- etc. can be computed efficiently using standard libraries.\", \"a3\": \"We do not know what fraction of the dataset is \\u201cnoisy\\u201d while training. Hence, the hyperparameter k in DCL is tuned by trial and error. The experiments we perform by varying the value of k help in understanding how CL serves as a regularizer.\\nHowever, we can compare the performance of DCL+ with varying k values by running them for one training epoch. 
In our experiments, we find that the k value that performs the best (decided by looking at the slope of the learning curve) for one epoch is a good choice for a full training of DCL+.\"}",
"{\"title\": \"Rebuttal revision\", \"comment\": [\"We first wish to thank the reviewers for their detailed comments and suggestions. Based on the reviews, we have made changes to our manuscript. Here is a list of the main edits that we made:\", \"Added a few more references on works related to optimizers in Section 1.\", \"Added a subsection (1.1) for related works. Recent works on reweighting and ordering are added.\", \"We make our contributions clearer in subsection 1.2.\", \"Algorithm 1 and Figure 3 from the previous version are now properly placed.\", \"We address the concerns regarding notations in the Preliminaries section.\", \"Details on how to measure speedup are mentioned in Section 2.\", \"Corrected Figure 3 (from the previous version) to be consistent with equation 1.\", \"Text added in Section 3 to address the concerns regarding DCL.\", \"The parts of the text that mention the baseline and experimental setup are highlighted.\", \"We hope we have addressed the major issues raised by the reviewers in our revised manuscript and replies.\"]}",
"{\"title\": \"The paper is of limited novelty and poorly written.\", \"review\": \"Summary:\\nThis paper studies curriculum learning and proposes two methods to order the examples by (1) gradient information and (2) statistical measures like standard deviation and entropy. The experiment results show that the proposed curriculum learning strategies can speed up the convergence by a large margin and the authors provide some insights about why curriculum learning works.\", \"strengths\": \"1. The proposed \\\"dynamic curriculum algorithm\\\" can speed up the convergence by ~45% and the proposed task-specific curriculum strategy based on standard deviation and entropy can yield an average speedup of ~43%.\\n2. The code and data are shared and helpful for reproducing the experiments conducted in the paper.\", \"weaknesses\": \"1. The paper is poorly written. There are not even sections to discuss related works and experimental settings in the main paper. Although some related works are discussed in scattered places in the paper, it might be helpful to have a specific section to compare related works with the proposed methods, which makes it much easier to identify the contributions and novelty of this work. Besides, although I found the experimental settings in the supplementary, the main paper at least should have discussed the basic experimental setup to understand how the experiments are conducted.\\n\\n2. The proposed DCL algorithm requires an optimal weight or a local optimal weight to calculate the difficulty scores. This requirement is unreasonable and renders the proposed methods useless.\\n\\n3. The proposed scoring function (the equation at the end of page 3) requires computing the gradients on each sample. Performing back-propagation and computing gradients for every sample is prohibitively expensive. The learning curves against the time cost should also be reported, to complement the learning curves against the training steps in Fig.1.\\n\\n4. 
The pace function is just a constant, dependent on a tunable hyperparameter k. From Fig.2, it seems that the value of k has a large impact on the testing accuracy. It is not mentioned in the paper how the value of k is selected.\", \"suggestions_for_improvement\": \"1. It might be better to have a specific section to discuss related works and compare them with the proposed method.\\n\\n2. Index all equations.\", \"questions\": \"\", \"the_questions_to_be_addressed_in_the_rebuttal_are_listed_below\": \"1. Where does the optimal weight in DCL come from? Can the authors justify why a given optimal weight can be used during the training?\\n\\n2. What is the time cost of the scoring function?\\n\\n3. How is the value of k in the pace function tuned?\\n\\n-------------------------------Post-rebuttal-------------------------------\\n\\nThank you for revising the submission and the clarification in the rebuttal. After reading the rebuttal and other reviews, my main concerns about the novelty and computation cost remain unresolved. Therefore, I will keep my original score.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting ideas, but execution leaves something to be desired.\", \"review\": [\"#### Summary\", \"This paper considers curriculum learning for neural networks in the context of supervised learning (specifically image classification).\", \"First, the authors propose and assess a DCL+ algorithm. DCL+ uses a scoring function based on the alignment of an example's gradient with the vector from the current weight to a local minimum weight (obtained via a previous \\\"vanilla\\\" run with standard SGD). The pacing function is a constant fraction of the dataset size. The effect is that for a given epoch, only the subset of data that induces gradients which most point towards the local minimum is used for training. DCL+ is empirically shown to result in marginal improvement over the vanilla run in terms of test performance at convergence, and a significant speedup to reach vanilla performance. To investigate the effect of minibatch ordering within an epoch, the authors propose DCL-, an ablation that reverses the DCL+ minibatch ordering. Since DCL- performs worse than DCL+, the authors argue that minibatch ordering matters.\", \"Second, the authors propose and assess a few curricula based on scoring functions that only use per-example statistics, e.g. standard deviation (stddev) or entropy of pixel values. Interestingly, it seems that for CNNs on CIFAR tasks, using stddev- (descending order of standard deviation) as the scoring function is best, while for MLPs on MNIST tasks, using stddev+ (ascending order of standard deviation) is best.\", \"The authors connect their DCL+ algorithm to Bengio et al. (2009), which argues that a successful strategy for CL methods is to remove \\\"noisy\\\" examples. 
They also provide scatterplots of the data under DCL+ and stddev scoring functions.\", \"#### Strengths\", \"The algorithmic innovations considered are simple.\", \"The comparison between DCL+ and DCL- shows that minibatch ordering can matter in CL.\", \"If one is willing to treat the unsupervised curriculum as a tunable hyperparameter (e.g. sweep over stddev+, stddev-, and vanilla), then the experiments suggest that we obtain some gain in test performance. A caveat is that this only is supported for the few relatively toy image classification datasets and architectures considered in the paper.\", \"#### Weaknesses\", \"For a purely empirical paper, I would not say that the experiments are very comprehensive. For example, a clear discrepancy exists between Exps. 3-5 and Exps. 6-7 (in the former, stddev- is best, while in the latter, stddev+ is best, and most of the time the opposite curriculum is worse than vanilla). No attempt is made to discuss this or tease out the underlying reason: is it the dataset (CIFAR vs. MNIST) or the model (CNN vs MLP), or something else?\", \"Conceptually, the curricula defined by image statistics have a critical weakness: they are completely agnostic to the label. This is in contrast with both DCL+/- and Bengio et al. (2009), which both consider \\\"example noise\\\" grounded in the full supervised task. The same image (and therefore statistics) with different labels (say, correct vs. incorrect) would play drastically different roles in a curriculum.\", \"The connection between the notion of a \\\"noisy example\\\" as considered by Bengio et al. (2009) (misclassified by a Bayes classifier) and that of DCL+ (the example's gradient has strong anti-alignment with the direction to the local optimum) is not made technically clear or explicit.\", \"Regarding Figs. 
1, 5, and 6: I appreciate that the authors included a measure of uncertainty based on 25 or 30 trials, but I would strongly suggest adding significance tests for a more compelling analysis. This is sorely needed for, e.g., Experiment 4.\", \"Fig. 7: The caption and the text imply that you'd provide correlations, but this is missing. Also, I don't see significant correlations for the bottom row.\", \"#### Recommendation\", \"I currently recommend rejection (4). While the ideas are simple and interesting, the weaknesses in this submission preclude it from being very informative or useful to the community.\", \"#### Questions\", \"Why does this work not consider comparisons with prior CL methods other than the vanilla baseline? If none are applicable, please explain why.\", \"Can you unify the notion of a \\\"noisy\\\" example as considered by Bengio et al. (2009) and by DCL+?\", \"Fig. 1: Why does subplot (a) only show a truncated x-axis?\", \"#### Minor suggestions\", \"p. 2: What is \\\"theoretical evidence\\\"? Do you mean that there is a lack of theory, or a lack of empirical evidence, or both?\", \"p. 2: Attribute the concepts of scoring and pacing functions to Hacohen and Weinshall (2019).\", \"In general, the clarity of the writing (e.g. sentence style, diction) can be improved by a few careful passes.\", \"-------------------------------Post-rebuttal comments-------------------------------\", \"Thank you for taking the time to revise your submission. I will maintain my original score of 4. The main justification for this is that the two main weaknesses I see in this paper (the first two I list in the original review) remain unresolved, and it indeed is unclear whether the second one could feasibly be addressed without significant changes to the core methodology currently proposed.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Simple but requires unrealistic knowledge and gains are small\", \"review\": \"The paper contains two curriculum learning algorithms, of which one assumes knowledge of the parameters found by the baseline, uniform-sampling, model to push updates in that direction, and the second orders images according to an increasing stddev/entropy of pixels. While the first approach is impractical because of the strong assumption, the second approach demonstrates small gains that lie within random variance (Fig. 5, Fig. 6) and would not be straightforward to apply to non-image data, e.g. text. These reasons make the paper hard to accept.\\n\\nThe main problem is knowing the parameters of the baseline, SGD, optimization. It's not clear why one would even need optimization again, if (a good enough) result is already known and gains from this re-optimization do not significantly improve over this baseline. The speedups mentioned in the abstract (45% and 43%) could not be located in the results in the main body of the paper. How were they measured? Even if aligning updates with the SGD-trained parameters does speed up convergence, re-training from scratch will cost 143% of baseline time instead of 43%, as the standard training needs to be counted too.\", \"issues_include\": [\"How to sample using \\\\rho_{t,i}? It's not a distribution and can be negative.\", \"Figure 1: Judging from the plot, the vanilla curve converges faster than the curriculum. How can one see the >40% curriculum speed up?\", \"Abstract's claim of removing noise is only supported in Section 4 through citing related works. Also, more evidence would be needed to call k a regularizer.\", \"lines 9-10 in Algorithm 1 would interfere with bucketing in seq2seq applications and adversely affect performance.\", \"Regarding related work in Sec. 
3: I couldn't confirm in (Graves et al, 2017) that they also sort examples by difficulty.\", \"The last approach to define curriculum through statistical quantities makes sense, in principle, although the difference between curves in Fig. 5 is very small and could be caused by random variance as the error bars on Fig. 5 and Fig. 6 show. Another problem is that it is straightforwardly applicable only to images and not to categorical data, like text.\"], \"one_suggestion_of_possible_paper_improvement\": \"consider swapping and reworking sections 5 and 3, so that the content of sec. 5 becomes the main proposal and a reworked sec. 3 - its analysis. There one could analyze if the example ordering according to stddev does bias updates towards some \\\"good\\\" point of convergence, with one possible definition of \\\"good\\\" according to (now, unknown during optimization) SGD results.\", \"other_minor_remarks\": [\"\\\"greedy approach\\\" is mentioned multiple times before being explained on page 4. Consider deferring the use of the term to that place.\", \"Contributions: useful is a vacuous word, consider dropping it.\", \"notation: square brackets used to denote several objects - sequences [B1, B2, ..] , ranges [T] and vectors [x1, x2, .. ]. Using different brackets could be better.\", \"well-known concepts:\", \"no need to define stddev and mean in (2)\", \"(Arora, 1981): if entropy requires a citation at all then citing Shannon directly would be more appropriate.\", \"Sec. 2: curriculum is defined by two functions -> we define curriculum by two functions\", \"conclusion: display -> show\", \"while CL indicate that -> while CL indicates that\", \"judicial ordering -> judicious ordering\", \"=== After rebuttal ===\", \"Thank you for your answers. I'm keeping the rating at 3.\", \"1)+2). 
I'm still not convinced that it's fair to claim an improvement of X% for a curriculum that, relying on the final weights of a \\\"vanilla\\\" SGD-trained model, converges in _additional_ X% to the 100% of \\\"vanilla\\\" time.\", \"3). Fair enough, but the revised draft still reads like examples are sampled from it.\", \"I double checked the context of citing (Graves et al, 2017) and believe it's still imprecise as in the original draft.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting ideas but much work is needed to formalize them and evaluate them correctly.\", \"review\": \"Summary:\\nThis work studies a number of curricula for faster training of neural networks. They first propose a curriculum named DCL+ that is designed to order data points based on the alignment of their gradients with the direction of optimization. This curriculum depends on the evaluation of individual gradients of datapoints as well as an approximation to a local optimum. Next, they study a number of easy-to-compute statistical measures for ordering data points.\", \"pros\": [\"The idea of ordering points based on the alignment of their gradients with the direction to the local optimum is interesting.\", \"The idea of using easy-to-compute statistical measures of data is also interesting.\"], \"cons\": [\"The empirical setup needs improvement. Important baselines are missing as described below. Standard datasets are not used and the ones used are relatively easy. The combination of these makes it hard to draw any conclusions.\", \"The ideas are not formalized well enough. Specifically, important details in the definition of optimal weights in DCL are missing. Also, it is not clear why the standard deviation of an image or its entropy could be a proxy for how useful a data point is for training.\"], \"detailed_notes\": \"\", \"intro\": [\"The discussion about SGD and the generalization of other optimization methods is missing recent works such as [1].\", \"It is said that \\u201c...SGD samples examples from data uniformly at random.\\u201d This is an inexact description of common training setups. We can use SGD with any stochastic estimate of gradients as long as it is unbiased. It doesn\\u2019t have to be from uniformly sampling a subset. See for example [2].\", \"Related works on example reweighting and ordering are missing. 
For example [3], [4], and [5].\"], \"section_3\": [\"Eq 1: 1) theta is not defined, 2) why isn\\u2019t the first equality an inequality?\", \"Figure 3: 1) what is theta? 2) Why is R3 marked as the distance between w3 and w0? Shouldn\\u2019t it be the distance between w3 and bar{w}?\", \"\\u201cWe approximate w \\u0304 with w \\u0303, which is a local minima obtained from training the vanilla SGD model\\u201d, isn\\u2019t the goal of the entire training to find w^bar? How many steps do you train for to get w~?\", \"\\u201cscore(x)\\u201d change of notation, score was defined as a function of both x and y in Section 2.\", \"\\u201cWe use learning rates with an exponential decay rate for the optimizers\\u201d what is an exponential decay rate? How do you tune? Tune both base and the exponent? Why is it fair? There are numerous other learning rate schedules that are more common like the step-decay.\", \"Figure 2: what is the dataset? What is the task? Why is a data point noisy if it has a low score? Does this figure mean the hyperparameters are tuned on the test set rather than a validation set?\", \"Experiment 1 is mnist with labels 0 and 1. As noted in the text this task is very easy. Both DCL- and DCL+ eventually seem to classify the test set correctly.\", \"Experiment 2 is small-mammals dataset. No citation is given. It is said to be a super-class of CIFAR-100. No comparison is done with optimization methods other than SGD. No references are given for prior baselines on this dataset. No reason is given for not trying out the proposed method on common benchmarks such as full MNIST, CIFAR-10, and CIFAR-100.\", \"Figure 1: the placement of this figure is before Figures 2, 3 and algorithm 1 but it is referred to after them. 
The caption is not descriptive enough as it refers to experiments 1 and 2, which does not make it easy to understand the figure without going back and forth between the figure and the text.\"], \"section_5\": [\"There is no empirical comparison with prior works.\", \"There is no empirical comparison with the DCL method proposed in prior sections.\", \"Figure 6: same problem in captions as Figure 1.\"], \"other\": \"- I could not uncompress the supplementary material. It needed PK compatibility V4.6 which does not come in the standard zip package.\\n\\n[1] Choi, Dami, et al. \\\"On empirical comparisons of optimizers for deep learning.\\\" arXiv preprint arXiv:1910.05446 (2019).\\n[2] Zhao, Peilin, and Tong Zhang. \\\"Stochastic optimization with importance sampling for regularized loss minimization.\\\" International Conference on Machine Learning. 2015.\\n[3] Loshchilov, Ilya, and Frank Hutter. \\\"Online batch selection for faster training of neural networks.\\\" arXiv preprint arXiv:1511.06343 (2015).\\n[4] Katharopoulos, Angelos, and Fran\\u00e7ois Fleuret. \\\"Not all samples are created equal: Deep learning with importance sampling.\\\" arXiv preprint arXiv:1803.00942 (2018).\\n[5] Chang, H. S., Learned-Miller, E., & McCallum, A. (2017). Active bias: Training more accurate neural networks by emphasizing high variance samples. In Advances in Neural Information Processing Systems (pp. 1002-1012).\\n\\n\\n=============\", \"after_rebuttal\": [\"Thank you for improving the clarity. Unfortunately, the following issues are still unresolved to me:\", \"Figure 1: this geometrical argument seems to be at the core of Eq. 1 but I still have a hard time understanding it. You might want to formalize the argument in text.\", \"Experiments are not convincing. Even the original MNIST is not a representative dataset for optimization methods. 
The theoretical contributions of this work is not enough to justify having limited experiments.\", \"The rebuttal says: \\\"The hyperparameter k in DCL is tuned by trial and error on the test set.\\\". Does that mean mistakenly using the test set as the validation set?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
7EDgLu9reQD | SALD: Sign Agnostic Learning with Derivatives | [
"Matan Atzmon",
"Yaron Lipman"
] | Learning 3D geometry directly from raw data, such as point clouds, triangle soups, or unoriented meshes is still a challenging task that feeds many downstream computer vision and graphics applications.
In this paper, we introduce SALD: a method for learning implicit neural representations of shapes directly from raw data. We generalize sign agnostic learning (SAL) to include derivatives: given an unsigned distance function to the input raw data, we advocate a novel sign agnostic regression loss, incorporating both pointwise values and gradients of the unsigned distance function. Optimizing this loss leads to a signed implicit function solution, the zero level set of which is a high quality and valid manifold approximation to the input 3D data. The motivation behind SALD is that incorporating derivatives in a regression loss leads to a lower sample complexity, and consequently better fitting. In addition, we provide empirical evidence, as well as theoretical motivation in 2D that SAL enjoys a minimal surface property, favoring minimal area solutions. More importantly, we are able to show that this property still holds for SALD, i.e., with derivatives included.
We demonstrate the efficacy of SALD for shape space learning on two challenging datasets: ShapeNet that contains inconsistent orientation and non-manifold meshes, and D-Faust that contains raw 3D scans (triangle soups). On both these datasets, we present state-of-the-art results. | [
"implicit neural representations",
"3D shapes learning",
"sign agnostic learning"
] | Accept (Poster) | https://openreview.net/pdf?id=7EDgLu9reQD | https://openreview.net/forum?id=7EDgLu9reQD | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"BCBLoeACvXs",
"WOtkWx5hY2c",
"Bw_Ft-Od4g",
"fwuVX8MV1T7",
"xz4sv-v-maa",
"f7lH5WYnFp",
"fg5BgpNxa7",
"SPeZ1KFRk-o",
"GuoA1qVc2I",
"uwD4wkOHf2b",
"Nc-cVYiVKW",
"9vsv9fecdRA"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040389474,
1606256897779,
1606256732316,
1606256652357,
1605869761634,
1605869634180,
1605869174095,
1605868895415,
1604601810339,
1604458677430,
1603918648202,
1603818094502
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3563/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3563/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3563/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3563/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3563/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3563/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3563/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3563/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3563/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3563/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3563/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"Congratulations! The reviewers unanimously viewed this work positively and were in favor of acceptance to ICLR.\\n\\nWhile the current revision already addresses many reviewer concerns, it may be worth adding some of the datasets pointed out by R3 or comparing to some of the papers suggested by R1.\"}",
"{\"title\": \"Revision looks good\", \"comment\": \"Thank you for the additional experiments and for the updated draft. I feel like they improve the quality of the paper overall. Given these changes, I feel the paper is in a good enough state to accept for publication.\"}",
"{\"title\": \"Response to reviewer4\", \"comment\": \"Thank you for suggesting to test the sample complexity hypothesis. We have uploaded a revised version of the paper which now includes both experiments you suggested: shape reconstruction, and latent shape reconstruction using a trained auto-decoder. The details are in section 4.3.\"}",
"{\"title\": \"Thank you for the reviews\", \"comment\": \"With this discussion period coming to an end, we would like to thank the reviewers for their constructive suggestions and remarks; we really feel it improved the paper. We have uploaded another revised version of the paper which includes a sample complexity experiment.\"}",
"{\"title\": \"Response to reviewer4\", \"comment\": \"**Q: In particular I would be more convinced by an experiment showing the degradation of SAL vs SALD as the number of available samples for a shape is decreased when (a) regressing a single shape directly from data (such as in IGR [1] Section 6), and (b) regressing a shape using an auto-decoder.**\\n**A:** Thank you for this suggestion: the paper indeed benefits from such an experiment. We added an experiment to the revised paper (see section 4.3), addressing the question of regressing a single shape. As can be learned by this experiment, SALD indeed enjoys better sample complexity than SAL, especially for low sample sizes. Our plan is to also include the experiment on regressing a shape using an auto-decoder in the next revision.\\n\\n**Q: Showing that global minima to SAL may satisfy the minimal surface property is indeed quite interesting. I do feel however that the claim in the paper regarding this is a bit oversold\\u2026. I feel the contribution should be rephrased to something along the lines of \\\"We give empirical evidence and theoretical motivation that minimizers of SAL-type losses produce solutions satisfying the minimal surface property\\\".**\\n**A:** We accept this comment and have edited the abstract and the introduction accordingly. \\n\\n\\n**Q: I imagine that computing losses on gradients of networks is quite expensive. How much is the increase in runtime compared to the gains in accuracy?**\\n**A:** Thank you for this comment. Indeed there is an additional computational cost for calculating the gradients of the network. We added a new section in the supplementary providing computational timings and memory footprint (see section A.2.3). \\n\\n[1]: Implicit geometric regularization for learning shapes. In Proceedings of Machine Learning and Systems 2020, 2020.\"}",
"{\"title\": \"Response to reviewer1\", \"comment\": \"**Q: My biggest concern is the motivation to learn sign distance function from its unsigned observations. For data (ShapeNet and FAUST) used in this paper, signed distances are immediately available -- one can easily convert a mesh to its implicit representation.**\\n**A:** We respectfully disagree. In ShapeNet many of the models are non-manifolds with inconsistent normals orientation and computing the signed distance is a non-trivial task. For example, DeepSDF [1] computes the signed distance supervision using a rendering procedure which provides only an approximation to the signed distance function and suffers from several drawbacks such as failure in presence of holes and occluded and invisible areas (e.g., the cars\\u2019 interior in figure 1). The data used in the DFaust experiment consists of raw scans. These raw scans have many \\u201creal-life\\u201d defects such as holes, ghost geometry and noise. Also in this case, computing a signed distance function is rather challenging, see e.g., [2] figure 6 that demonstrates attempts to directly compute the SDF from the raw data. \\nTo summarize, computing SDF directly from raw data with holes, noise, and occluded parts is a highly non-trivial problem which is at the heart of the surface reconstruction field. \\nAnother drawback of the DeepSDF approach is based on its \\u201ctwo stage solution\\u201d: First, infer the reconstruction individually for each shape; and only then learn a shape space from the extracted 3D supervision. Note that in the first stage each surface is considered **independently** from all other surfaces in the dataset. An advantage of SALD is the ability to learn the signed representations and the shape space **together**.\\n\\n\\n**Q: There are multiple existing works on this direction which this paper doesn't mention (or briefly mentions but doesn't compare to). E.g, \\\"Deep geometric prior for surface reconstruction\\\" and \\\"Point2Mesh: A Self-Prior for Deformable Meshes\\\".**\\n**A:** Deep geometric prior for surface reconstruction is a surface reconstruction method, based on an Atlas parametrization of a surface, that was not used for shape space learning. We added a citation to Point2Mesh in our revised previous work section. However, Point2Mesh is another surface reconstruction method not adapted for shape space learning. Thus, we find these works are not natural baselines to our experiments. More relevant to our work is AtlasNet, which is also a parametrization based method. However, a comparison of SAL versus AtlasNet is already done in [2], establishing AtlasNet as inferior to SAL (which is inferior to SALD, as we show in this paper) at the task of shape space learning on the D-Faust raw scans dataset. The relevant details are mentioned in the paper (see the first paragraph in section 4.2). \\n\\n**Q: In the implementation detail, the paper says it uses a similar architecture to DeepSDF in the auto-decoding case. However, the method shows improvements over DeepSDF. This seems impossible given that DeepSDF learns from direct signed distance supervision. So I am wondering if this is due to model size differences. I'd like to see more comparisons to DeepSDF under exactly the same model capacity.**\\n**A:** All methods in the experiment section: DeepSDF, IGR, SAL, SALD use the **exact same** architecture of an 8-layer MLP with 512 hidden units. The only difference is that SALD uses smoothed-ReLU (Softplus) instead of ReLU activation for continuous gradient computations. The improvement in reconstruction quality of SALD with respect to DeepSDF can be attributed to the following properties: i) SALD learns directly on the input shape, containing occluded parts, whereas DeepSDF uses approximated signed distance supervision derived only from visible parts. ii) DeepSDF does not exploit normal information explicitly. In fact, many of the shapes in ShapeNet have inconsistent normals orientation, a challenge alleviated with the SALD loss. \\n\\n[1]: DeepSDF: Learning continuous signed distance functions for shape representation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.\\n[2]: SAL: Sign agnostic learning of shapes from raw data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.\"}",
"{\"title\": \"Response to reviewer3\", \"comment\": \"**Q: In fact, aligning gradients of the implicit surface with the ones of the data is not a new idea and has been done for instance in quadric fitting**\\n**A:** Thank you for pointing out these references, we have added them to our revised previous work section. \\n\\n**Q: In addition to aligning the gradients, many works benefit from constraining the gradient norm of the implicit function to be $|\\\\nabla f| = 1$. Can we think of a similar approach here?**\\n**A:** Thank you for this interesting question. We think that the main difference between SALD and gradient norm penalty methods (such as IGR [1]) relates to the way the learned implicit function completes missing parts. The SALD loss explicitly regularizes for solutions to possess minimal surface area, whereas in IGR the regularization leads to constant curvature-like solutions. For example, in the D-FAUST experiment (section 4.2), we see that the SALD approach is more suitable for completing missing parts in the areas of the human feet (see figure 8 in the paper). We revised the paper to include more details about this issue: see the paragraph about the minimal surface property in section 3 and figure A.1 in the appendix. \\nLastly, in the revised conclusions section we added a comment that incorporating sign-agnostic losses with gradient norm penalty is an interesting future work direction, potentially combining the advantages from both methods. \\n\\n**Q: Compare against the variants of DeepSDF (MetaSDF and Curriculum DeepSDF).**\\n**A:** We believe these methods solve different problems than SALD, and therefore do not serve as natural baselines. Curriculum DeepSDF is a method suggesting a weighted signed distance regression, where the weights are extracted based on 3D supervision (such as the sign information). As our method addresses the problem of learning signed solutions **without** 3D supervision, it is not immediately clear how to incorporate this into the sign-agnostic framework.\\nMetaSDF is a recent approach for shape space learning, based on ideas from Meta Learning. In our paper, we incorporate SALD in Auto-Decoder (AD) and Variational Auto-Encoder (VAE), which are two other state-of-the-art shape space learning architectures. Indeed, SALD can also be incorporated in MetaSDF. Although the choice of the shape space learning architecture and method is a very interesting research question, we feel it is somehow orthogonal to SALD's contribution, concentrating on the reconstruction loss rather than shape space generalization. We therefore leave this to be investigated in future works.\\n\\n**Q: Would it be possible to include additional real objects that are non-humans?**\\n**A:** First, please note that ShapeNet models are non-human models that are very often modeled as triangle soups, that is, not manifolds and possess inconsistent normals. In that aspect the ShapeNet experiments in the paper provide real non-human objects. As to **raw scans** of non-human objects - we are not aware of such a freely available large-scale dataset and we agree the community would benefit tremendously from such a dataset.\\n\\n\\n**Q: discussions on the following aspects could be valuable for the reader: (i) What would be a good suggestion to handle thin-structures? (ii) The use of raw point sets is good, but such data usually come partially observed. Could this method support partial observations?**\\n**A:** (i) We added a discussion to the revised paper about learning thin-structures with implicit neural representation (see section 4.4 in the paper). (ii) SALD is a suitable method to tackle partial observations up to some extent due to its minimal surface property. For instance, many of the D-Faust raw scans contain holes which are gracefully completed by SALD. More challenging scenarios (i.e., large missing parts) should probably be treated with appropriate shape space regularization, which is again a very interesting research direction but outside the scope of the current paper. \\n\\n\\n**Q: Can we already compare D and D' and give an intuition about what they might refer to at the place they are first defined?**\\n**A:** We added such an explanation in the revised paper. \\n\\n[1]: Implicit geometric regularization for learning shapes. In Proceedings of Machine Learning and Systems 2020, 2020.\"}",
"{\"title\": \"Response to reviewer5\", \"comment\": \"**Q: Maybe I missed this somewhere, but if the derivatives are sign agnostic, couldn't it happen that the inside is positive? Did the authors encounter that in some cases?**\\n**A:** The SALD loss does not encourage a specific sign inside of the shape. Namely, both solutions (i.e., negative inside and positive outside, and vice-versa) are local minima. However, each of these solutions is stable. That is, continuously moving from one signed solution to another (during optimization) would yield a significant increase in the SALD loss. In practice, the solutions SALD converges to are the ones where the sign of the inside of the shape is always negative. This is a result of the geometric initialization scheme used for SALD (see figure 3 in [1]).\\n\\n[1]: SAL: Sign agnostic learning of shapes from raw data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.\"}",
"{\"title\": \"SALD review\", \"review\": \"This paper is based on the \\\"sign agnostic learning\\\" (SAL) method for capturing signed distance functions with neural networks. It extends this method by incorporating derivative information, which interestingly can likewise be handled in a sign agnostic manner. (Maybe I missed this somewhere, but if the derivatives are sign agnostic, couldn't it happen that the inside is positive? Did the authors encounter that in some cases?)\\n\\nThe paper presents and motivates this extension together with an additional theoretical insight about the minimal surface property of SAL and SALD. In line with SAL, the paper presents a nice variety of results for shapes from different shape databases. The quantitative results are also convincing. It's interesting to see the substantial difference between the VAE and AD architectures. For the comparison with SAL it's good to see the direct improvements from the derivative loss with a VAE.\\n\\nThe paper leans heavily on SAL, and the change in terms of the overall method seems to be fairly small. Nonetheless, I think it's an interesting insight that the sign agnostic derivatives can be included in this way, and I found it interesting to see how much they improve the results.\\n\\nGiven that learning signed distance functions is a very active topic, and a very useful building block for a variety of adjacent works that use learned SDFs, the proposed SALD approach seems like a very nice advancement of the state of the art.\\n\\nSo, overall, I really liked the paper. Figure 2 alone is impressive, and makes a good case for the method. Together with the nice presentation and set of results I think this paper makes for a very good addition to ICLR.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A good paper addressing an important problem.\", \"review\": \"This paper presents SALD, a new type of implicit shape representation that, in addition to predicting the signed distance function, aligns the gradients of the distance function with those of the neural distance field. The resulting algorithm, for example, has improved approximation power and better preserves sharp features than its ancestor SAL (sign agnostic learning). The formulation is such that the architecture can consume raw point clouds. \\n\\nSTRENGTHS\\n\\nThis paper certainly speaks to me. First of all, learning implicit representations directly from raw point clouds can allow for interesting applications such as better generative models or efficient 3D reconstruction networks. The approach is very sensible. In fact, aligning gradients of the implicit surface with the ones of the data is not a new idea and has been done for instance in quadric fitting:\\n* Birdal, T., Busam, B., Navab, N., Ilic, S., & Sturm, P. (2019). Generic primitive detection in point clouds using novel minimal quadric fits. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(6), 1333-1347.\\n* Tasdizen, T., Tarel, J. P., & Cooper, D. B. (1999, June). Algebraic curves that work better. In Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149) (Vol. 2, pp. 35-41). IEEE.\\n\\n[the paper might benefit from including those especially because it has related work sections called 'primitives' and 'implicit representations'.].\", \"this_is_not_a_drawback_but_just_the_opposite\": [\"there is strong prior evidence that such approaches are useful. I also like that the authors spend a reasonable amount of effort for theoretical analysis. Though, I believe that this can be extended to more realistic scenarios (as the authors aptly explained in the limitations).\", \"WEAKNESSES / ISSUES\", \"In addition to aligning the gradients, many works benefit from constraining the gradient norm of the implicit function to be |\\\\nabla f| = 1. See for instance:\", \"Slavcheva, Miroslava, et al. \\\"KillingFusion: Non-rigid 3D reconstruction without correspondences.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.\", \"Can we think of a similar approach here? Could the paper show some ablations with regularizers concerning the gradient norm?\", \"Nowadays, the use of implicit 3D representations is omnipresent. In the evaluations, would it be possible to compare against the variants of DeepSDF (e.g. Curriculum DeepSDF or MetaSDF etc.)? With that, it might also be nice to include some more qualitative results in the supplementary.\", \"Would it be possible to include additional real objects that are non-humans? This might involve for instance cars in an autonomous driving scenario.\", \"Some discussions on the following aspects could be valuable for the reader: (i) What would be a good suggestion to handle thin-structures? It seems to be a common issue among many SDF-like methods. (ii) The use of raw point sets is good, but such data usually come partially observed. Could this method support partial observations? If not, could there be a workaround?\", \"The Chamfer distance and the variations thereon are obviously not well suited to assess the accuracy of the deep implicit representations. This creates an urge for better quantitative metrics, maybe the data driven ones. For the future, I would strongly suggest thinking about those to have more meaningful evaluation data.\", \"Some minor remarks:\", \"Can we already compare D and D' and give an intuition about what they might refer to at the place they are first defined?\", \"\\\"they strives to\\\" -> they strive to\", \"\\\"tested SALD ability\\\" -> tested SALD's ability\", \"\\\"the surfaces produces\\\" -> \\\"the surfaces produced\\\"\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good problem, weak motivation, and issues in experiment design.\", \"review\": \"This paper studies how to generate meshes from raw point clouds. In particular, this paper proposes a framework which is built on top of the recent \\\"sign agnostic learning (SAL)\\\" work. Compared to SAL, this work adds a gradient penalty term, which encourages derivative consistency. The problem studied in this paper is important, however, the proposed method is very incremental and has several motivation issues. I summarize the pros and cons as follows.\", \"pros\": \"1. The idea of using a gradient penalty to learn a \\\"sharp\\\" signed distance function seems convincing. In Figure 4, the proposed method preserves sharp features compared to its counterpart SAL.\\n2. This paper presents a theoretical intuition why SALD works -- under a uniform distribution assumption, SALD finds the global minimum.\", \"cons\": \"1. My biggest concern is the motivation to learn sign distance function from its unsigned observations. For data (ShapeNet and FAUST) used in this paper, signed distances are immediately available -- one can easily convert a mesh to its implicit representation. To me, learning signed distance function (as DeepSDF does) is more convincing since the direct supervision is available. So why does this method bother to learn the proxy objective (unsigned distance function)? \\n2. Following 1, the most obvious application of this paper would be learning signed distance function when the distances are not available -- the input is either LiDAR scan or depth image. In that case, if the paper can reconstruct realistic 3D models, it will be much stronger. \\n3. To some extent, this paper uses neural networks to learn sign priors from data. There are multiple existing works on this direction which this paper doesn't mention (or briefly mentions but doesn't compare to). E.g, \\\"Deep geometric prior for surface reconstruction\\\" and \\\"Point2Mesh: A Self-Prior for Deformable Meshes\\\". The paper should at least explain the differences of the tasks if it doesn't compare to them. \\n4. In the implementation detail, the paper says it uses a similar architecture to DeepSDF in the auto-decoding case. However, the method shows improvements over DeepSDF. This seems impossible given that DeepSDF learns from direct signed distance supervision. So I am wondering if this is due to model size differences. I'd like to see more comparisons to DeepSDF under exactly the same model capacity.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good paper but needs some revisions for acceptance\", \"review\": \"## Summary of paper and contributions\\nSALD extends prior work on Sign Agnostic neural implicit shape representations to include a loss term on the derivative of the implicit function. The authors justify the benefits of derivatives in 2 ways: (a) By citing prior work [1] which shows empirically that derivatives decrease sample complexity of deep ReLU networks, and (b) By showing qualitative improvements over SAL without derivatives.\\n\\nThe authors show qualitative evidence that global minimizers of sign agnostic losses (with and without derivatives) satisfy the *minimal surface property*, a desirable property of solutions commonly discussed in the surface reconstruction literature. They demonstrate this property via 2D experiments and via a motivating theoretical example. \\nFinally, the authors show their loss function can be integrated into existing generative shape modelling pipelines, comparing results on ShapeNet and D-FAUST against DeepSDF, which requires pre-computed SDF data, and SAL, which can operate on raw inputs.\\n\\n## On the benefit of using derivatives\\nThe authors cite [1] to motivate the benefit of including derivative terms in the loss. In the case of deep ReLU networks such as the one used by the authors, this prior work shows an empirical reduction in sample complexity when regressing low dimensional functions (Section 4.1) motivated by a theoretical intuition (Section 3). While the neural implicit functions learned by SALD are indeed low dimensional, the shape-space learning problem is not: It learns a map from a point set (consisting of many points) or a high dimensional (256 in the SALD case) latent code to an implicit function. Given this, I don't believe the authors can simply claim a reduction in sample complexity by citing [1] without demonstrating further experimental evidence, especially given the fact that the experiments in the paper do not show SALD drastically improving over SAL.\\n\\nIn particular I would be more convinced by an experiment showing the degradation of SAL vs SALD as the number of available samples for a shape is decreased when (a) regressing a single shape directly from data (such as in IGR [2] Section 6), and (b) regressing a shape using an auto-decoder. \\n\\n## Minimal surface property\\nShowing that global minima to SAL may satisfy the minimal surface property is indeed quite interesting. I do feel however that the claim in the paper regarding this is a bit oversold. In particular \\\"We prove that SAL enjoys a minimal length property in 2D\\\" (Abstract) and \\\"Identifying and providing a theoretical justification for the minimal surface property of [sal].\\\" (end of Section 1). The minimal surface property is well known in the surface reconstruction literature (e.g. [3] cited by the authors in Section 3) and the theorem shown by the authors appears to be for a specific case in 2D unless I am missing something. While these results are not trivial, I feel the contribution should be rephrased to something along the lines of\\n\\\"We give empirical evidence and theoretical motivation that minimizers of SAL-type losses produce solutions satisfying the minimal surface property [citation]\\\"\\n\\n## Experimental Evidence\\nI feel like the choices of datasets and baselines are sufficient to show the effectiveness of SALD. There are two experiments however which I feel are missing from the paper:\\n 1. The sample complexity experiment described above.\\n 2. Some kind of performance evaluation. I imagine that computing losses on gradients of networks is quite expensive. How much is the increase in runtime compared to the gains in accuracy?\\n\\n## Summary of review\\nGeneralizing SAL to include derivative quantities is a natural next step for this line of work. The authors show that SALD improves performance over the state of the art on ShapeNet and performs comparably on D-FAUST. While these results are great, I feel the paper is missing a few key experiments described above, and that the claims around the minimal surface property are a bit overblown. I am rating this paper as marginally below the acceptance threshold but am more than willing to increase my score if the authors make the requested revisions or give a strong justification as to why they are unnecessary in their rebuttal. \\n\\n## References\\n[1] Czarnecki et al. - Sobolev Training for Neural Networks\\n[2] Gropp et al. - Implicit Geometric Regularization for Learning Shapes\\n[3] Zhao et al. - Fast surface reconstruction using the level set method\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
]
} |
s0Chrsstpv2 | Better sampling in explanation methods can prevent dieselgate-like deception | [
"Domen Vreš",
"Marko Robnik Šikonja"
] | Machine learning models are used in many sensitive areas where, besides predictive accuracy, their comprehensibility is also important. Interpretability of prediction models is necessary to determine their biases and causes of errors, and is a necessary prerequisite for users' confidence. For complex state-of-the-art black-box models, post-hoc model-independent explanation techniques are an established solution. Popular and effective techniques, such as IME, LIME, and SHAP, use perturbation of instance features to explain individual predictions. Recently, Slack et al. (2020) put their robustness into question by showing that their outcomes can be manipulated due to the poor perturbation sampling employed. This weakness would allow dieselgate-type cheating by owners of sensitive models, who could deceive inspection and hide potentially unethical or illegal biases existing in their predictive models. This could undermine public trust in machine learning models and give rise to legal restrictions on their use.
We show that better sampling in these explanation methods prevents malicious manipulations. The proposed sampling uses data generators that learn the training set distribution and generate new perturbation instances much more similar to the training set. We show that the improved sampling increases the robustness of LIME and SHAP, while the previously untested method IME is already the most robust of all. | [
"Explainable AI",
"explanation methods",
"robust explanations"
] | Reject | https://openreview.net/pdf?id=s0Chrsstpv2 | https://openreview.net/forum?id=s0Chrsstpv2 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"qNKCWuPa7v",
"i3q7Ro74Glm",
"V7avAHpJ5gW",
"TJ9UgddT2wQ",
"eiC7pkRmqr6",
"sytKzhZ_Wd_",
"JZFhzaFmFMU",
"R85WAl7Ay39",
"vDQRQ1dDEot"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040413663,
1606205537386,
1606205458534,
1606205287903,
1606205196800,
1604056453077,
1604049869033,
1603910096337,
1603294511620
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3562/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3562/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3562/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3562/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3562/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3562/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3562/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3562/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The overall impression on the paper is rather positive, however, even after rebuttal, it still seem that the paper requires further work and definitely a second review round before being ready for publication. Thus, I encourage the authors to continue with the work started during the rebuttal to address the reviewers' comment, which although moved in the right direction would still benefit from further work. Especially, I believe the experiments could be significantly improved (by for example bringing some results to the main paper). Moreover, a more thorough comparison theoretically and empirically with previous work would increase the impact of the paper.\"}",
"{\"title\": \"Thank you for your suggestions, paper updated\", \"comment\": \"Thank you for careful reading and suggestions which helped us to improve the work. We have added some additional explanations to the text and appendices of the paper.\\n\\nWe added a new set of experiments, described in Section 4.4 and Appendix G, which addresses different levels of attackers\\u2019 conservatism. The results show that even with different thresholds, gIME is still the most robust from the three explanation methods and treeEnsemble still gives the best results as the data generator. While the percentage of the instances, on which the biased behavior of the adversarial model is recognized, drops with higher values of the decision thresholds, it still remains high enough to warn the regulator about the problematic behavior of the prediction model (especially in the case of gSHAP and gIME using treeEnsemble as data generator). \\n\\nWe checked how changing the certainty threshold of the decision model affects the behaviour of the adversarial model and the robustness of explanation methods. We used five different thresholds on COMPAS dataset in this experiment reported in Section 4.3 and Appendix G. The results show that gSHAP and gIME (and also gLIME, if adversarial model does not use treeEnsemble in training) are still robust enough to warn the controller about potential issues with that adversarial model.\\nWe checked if modified generators gIME, gLIME, and gSHAP change the obtained explanation scores. We used three dataset and five classifiers in this experiment reported in Appendix F. The results show negligible difference in explanations produced by IME and SHAP, and larger differences between LIME and gLIME.\\n\\nWhile we did no sensitivity analysis for the modified gIME method, this was done for the original IME method by \\u0160trumbelj and Kononenko in 2010 JMLR paper (https://www.jmlr.org/papers/volume11/strumbelj10a/strumbelj10a.pdf). 
How to determine the correct number of samples to limit the error of the estimated Shapley values is described in section 3.2.1 of that paper (we added the reference to that paper to text in Appendix C). We used that method to determine the number of samples in our IME convergence rate experiment. The number of samples should be determined in the same way in both the adversary and non-adversary environment as it depends only on the variance of the samples.\"}",
"{\"title\": \"Thank you for your feedback, some additional explanations added to the paper\", \"comment\": \"Thank you for careful reading and suggested references which we help us to further improve the work. We have added some additional explanations to the text and appendices of the paper.\\n\\nThe mentioned ArXiv paper (Frye et al, 2020, Shapley explainability on the data manifold) was not available at the time of ICLR submission and we could not know about it. Further, this paper suggests a different explanation method which may or may not be more robust than the existing explanation methods (robustness was not analysed, nor adversarial attacks were prevented).\\nOur paper uses the term robustness in a sense of prevention of adversarial attacks and not in a sense defined in the mentioned paper (Alvarez-Melis and Jaakkola, 2018, On the Robustness of Interpretability Methods). We now explain our use of robustness in the introduction. \\n\\nConcerning robustness of modified generators, we checked if modified generators gIME, gLIME, and gSHAP change the obtained explanation scores. We used three dataset and five classifiers in this experiment reported in Section 4.3 and Appendix F. The results show negligible difference in explanations produced by IME and SHAP, and larger differences between LIME and gLIME.\\nThe analysed perturbation-based explanation methods are standard, broadly used tools in machine learning with many uses. Improvements we propose, make them more robust. Using the training set directly to generate explanations as proposed by (Frye et al, 2020) is still open to thorough investigation and the test of time.\"}",
"{\"title\": \"Thank you for your feedback, paper updated\", \"comment\": \"Thank you for careful reading and informative comments which we will use to further improve our work. We have added some additional explanations to the text and appendices of the paper.\\n\\nConcerning the robustness results, we included Table 3 to Appendix E which includes the same information as the heatmap in Figure 2. The heatmap serves better to get a quick overview, while the tabular form gives better and more detailed information. Due to the limited space we initially included only the heatmap.\\nWe added a detailed description of the way how discriminators are trained to Appendix D which now includes pseudocode of three algorithms.\\n\\nWe checked if modified generators gIME, gLIME, and gSHAP change the obtained explanation scores. We used three dataset and five classifiers in this experiment reported in Section 4.3 and Appendix F. The results show negligible difference in explanations produced by IME and SHAP, and larger differences between LIME and gLIME.\\n\\nWe improved Section 4.2, where we now explain better how the deception is simulated. In all cases, the biased model b (see Section 2.2) was a simple function depending only on the value of the sensitive feature. The unbiased model Psi depended only on the values of unrelated random features. For unbiased models the sensitive feature was therefore never the most important one. If explanation method did not recognizing the sensitive feature, we consider the deception successful.\", \"being_aware_that_successful_defence_to_adversarial_attacks_to_explanations_requires_access_to_at_least_part_of_the_training_set_is_an_important_results_of_this_paper_which_we_stated_in_the_conclusions\": \"\\u201cInspecting authorities shall be aware of the need for good data generators and make the access to training data of sensitive prediction models a legal requirement. 
Luckily, even a few non-deceived instances would be enough to raise an alarm about the unethical model.\\u201d\"}",
"{\"title\": \"Thank you for your positive attitude\", \"comment\": \"Thank you for careful reading and positive attitude to our work. We have added some additional explanations to the text and appendices of the paper.\\n\\nThe used generators were not systematically tested for images, though there are indications that they perform reasonably well. In particular, Miok et al (2019) demonstrate their MCD VAE generator on MNIST dataset. However, there is little work using IME, LIME, or SHAP in image classification. It seems that images require specialized explanation approaches and these are mostly merged with recent neural image classifiers. We see this as an opportunity for further work.\\nWe checked if modified generators gIME, gLIME, and gSHAP change the obtained explanation scores. We used three dataset and five classifiers in this experiment reported in Section 4.3 and Appendix F. The results show negligible difference in explanations produced by IME and SHAP, and larger differences between LIME and gLIME.\\n\\nIn our experiments, the instance sensitive variant of the TreeEnsemble generator, named TensFill, worked best but we think this aspect has to be further tested using more datasets with different characteristics. E.g., for images, we presume that the MCD VAE generator or some GAN-based generator could be more successful.\"}",
"{\"title\": \"Good comparison of data generators for adversarially robust explanations\", \"review\": \"Summary\\n-------------\\nFollowing the work of Slack et al (2020), which presents adversarial attacks on explanations, this work proposes a solution, that is to use improved perturbation data generators that produce instances more similar to samples in the training set. \\nThis work also shows that the IME method is more resilient to adversarial attacks in comparison to LIME & SHAP, while both LIME and SHAP would benefit from the proposed data generators. \\n\\n+ves: \\n-------\\n- Overall, the solution to use improved data generators that closely match the training data distribution is a good one including the comparison between the different data generators. The result on the robustness of IME method is good. \\n- Authors have submitted modified version of the code i.e. gLIME & gSHAP which use the proposed improved data generators. Both the code and the result on IME is expected to benefit the AI explainability community and practitioners that more or less rely on either LIME or SHAP today. \\n\\nPossible improvements\\n--------------------------------\\n- It would be good to comment on how the data generators work on images. \\n- Using training data distribution may perhaps improve the overall quality of explanations as well i.e. beyond making them robust to adversarial attacks, it might be good to discuss any such benefits in the paper by considering explainability metrics such as monotonicity, faithfulness, etc. \\n- One thing which is unclear is author's recommendation on which data generators ( among the 3 evaluated ) to eventually use - what are their pros/cons. 
Does this depend on the type / distribution of data or explainability method or both.\", \"conclusion\": \"---------------\\nOverall, this is a nice piece of work which leverages existing data generators to show that adversarial robustness of LIME & SHAP based explanations can be improved.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper focuses on the adversarial scenario presented in Slack 2020 \\\"Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods\\\": an adversarial entity can design a model with obvious biases that will look innocuous to regulators when analyzed by post-hoc explanation methods such as LIME, SHAP, etc. This is achieved by leveraging the idea that the perturbations used by LIME, SHAP and other methods follow a different distribution than the original data, and therefore the adversary can learn how to distinguish the perturbed samples from the real ones and then run an unbiased version of the model when it is being probed.\\n\\nThis paper looks at this scenario from the eyes of the regulator that has to probe the model to decide whether it is biased. The main proposal of the paper is to alter the way in which LIME, SHAP, etc generate the perturbations needed to compute the explanation. In particular, this paper proposes to use perturbations that closely follow the data distribution, making it harder for the adversary to distinguish between genuine samples (that should go through the biased model) and perturbed samples (that should go through the unbiased model, as they imply that the model is being probed.)\\n\\nAlthough I liked the idea exposed in the paper and enjoyed reading the background and related work, the experimental section and the conclusions interpreted from results seem a bit preliminary. \\n\\nExperimentation is not very thorough, covering only robustness of the proposed sampling when pairing different generators and discriminators. The only quantitative results are provided through Figure 2 and are color coded, making them hard to compare. A table would have likely been a better way of presenting these results. 
More details on how the discriminator d in (eq(1)) is designed and trained would also have been of interest, particularly since different discriminators could have been evaluated.\", \"additional_comments_and_questions\": [\"Figure 2, the use of green and red colors is inconsistent between what is described in text and figure. Text says \\\"The green colour means that the explanation method was deceived in less than 30% of cases, and the red means that it was deceived in more than 70% of cases\\\" but figure legend has 0 to 0.5 being red and 0.5 to 1 being green.\", \"One underlying assumption is that changing the perturbation used by the explanation method will not hinder the validity of the explanations. Yet, of course, explanation methods are sensitive to how the perturbations are created (trivially, one could use Gaussian noise with a very large variance to create perturbations that are not useful to generate good explanations). The paper focuses on the impact on the robustness to attacks, but more discussion and empirical results about the impact on explainability of the original method would be required.\", \"From section 4.2, \\\"We consider deception successful if the sensitive feature was not recognized as the most important by the explanation method\\\". Does that mean that deception can't be successful for samples where the sensitive feature is the most important feature on the unbiased model? Or are these features removed in the unbiased model? It is not completely clear from the explanation.\", \"How is the discriminator d (eq(1)) defined/trained? 
I could not find this information in the paper.\", \"Learning of the perturbation requires access to copious amounts of data from the real distribution, which may not actually be accessible to the regulator, rendering some/all of the defenses ineffective.\", \"\\\"Dieselgate\\\" is not common term?\", \"Overall, despite the interesting idea, the paper looks to be in a preliminary state.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper suggests to replace the perturbations part for the existing post-hoc explanation methods like LIME and SHAP with on-data manifold sampling methods.\", \"review\": \"ICLR 2021 Review - Better Sampling in Explanation Dieselgate\", \"summary\": \"The paper suggests to replace the perturbations part for the existing post-hoc explanation methods like LIME and SHAP with on-data manifold sampling methods.\\n\\nSHAP and LIME use perturbations or randomly generated points to explain the decision of the black-box models. These points are out-of-distribution data, that leads to a new avenue for adversarial behavior discussed in Slack et al. (2020). The authors use existing data generators to produce better perturbations. They further empirically evaluate the robustness of explanations generated after proposed changes on real-life datasets.\", \"comments\": \"1. It is not clear what exactly the contribution of the paper. The problem is identified by existing papers [slack et al (2020),\\u00a0https://arxiv.org/pdf/2007.09969.pdf], etc, and mentioned that such attacks fail trivially if perturbations are from data distributions. The data generators are used from the existing literature. A recent paper [https://arxiv.org/pdf/2006.01272.pdf] proposes more efficient and theoretically sound on-data manifold SHAP computations.\\u00a0\\u00a0\\n2. The definition of robustness is not formally stated in the paper. The usual robustness in explanations [https://arxiv.org/pdf/1806.08049.pdf] bounded/negligible change in the explanation if the point of interest it changed slightly. It is not clear how random perturbations around the point of interest affect robustness.\\n3. The evaluations in the paper are weak, it is trivial that if perturbations are from data distributions the attack proposed in Slack et. al (2020) will fail (it is discussed in Slack et. al (2020) as well). Moreover, the paper does not evaluate the effects of used sampling methods in explanation. 
The data generating model itself a black-box model and involves more uncertainties in explanations. Minor: why one can\\u2019t use the training dataset itself to generate model explanations rather than using black-box data generators?\\u00a0\\u00a0\\n\\u00a0\\n\\n**After Rebuttal** \\n\\nI would like to thank the authors for their rebuttal. I agree that it is not fair to assess the merits of the current work based on papers that were not available at the time of submission (or that, strictly speaking, have not been published at the time of submission). Indeed, to an extent, pointing out the ArXiv paper encourages authors to simply submit their works there to get a \\\"publication\\\" stamp, which on a community level is undesirable (papers on ArXiv aren't reviewed and citing them as scientific sources is problematic to say the least). I suppose the only point is that there exist works that do similar things in a more compelling fashion. \\n\\nIt's encouraging to see that the authors checked for robustness of their method, and I appreciate the efforts. \\n\\nWith these two issues resolved to a certain extent, I am willing to increase my score.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Nice use case for data generators, requires more analysis to contribute to explanation literature\", \"review\": \"Summary: This paper proposes a defense against the adversarial attacks on explanation methods described in Slack et al. (2020). In particular, by using sampling methods that more closely resemble the original data distribution, the authors make it difficult for the Out-of-Distribution detector to successfully discriminate between instances used for predicting and instances used for explaining.\", \"positives\": \"The paper represents a nice use case for this suite of data generators. The necessary background information is explained well, and the testing is thorough with respect to comparing robustness across the data generators\", \"negatives\": \"Firstly, I'm concerned that the evaluation metric you use does not fully capture the nature of the problem. In particular, it is not clear that in testing you ensure that the biased classifier is deployed on all testing instances. The % of the time that the sensitive variable appears in the top position will also depend on how aggressively the biased classifier is used. When testing/explaining points are separable, this is less of an issue, as the biased explainer will be used on all (or almost all) testing points. However, the purpose of your method is to make testing/explaining points more difficult to distinguish. In this case we might imagine that the biased explainer is being used more conservatively - that is, if the adversarial classifier cannot distinguish between testing points and sampling points and therefore deploys the unbiased classifier on some testing points, the sensitive attribute may correctly go undetected. 
The true metric of interest should relate to the % time the sensitive variable is identified when it is being used, at different levels of conservatism.\\n\\nThe authors of LIME, SHAP, and IME make careful design choices for reasons of 1) breaking data correlations, 2) ensuring the satisfaction of certain axioms, and 3) run-time. It would be important to relate this new methodology to its predecessors along these lines as well. In particular, sampling only from the manifold of the original training data can be expected to maintain the correlation structure of the original data. This is useful for fooling the adversarial classifier but you may sacrifice an ability to differentiate between the model's usage of correlated features.\\n\\nI'd like to see some sensitivity analysis to the number of samples. You point out that IME is \\\"already quite robust\\\" but certainly this seems counterintuitive at small sample sizes. At which sample size does this become true?\", \"related_work\": \"Saito et al. address this problem concurrently https://arxiv.org/pdf/2006.12302.pdf\\n\\nAll in all, I find the work to be a useful step forward, but believe that it would benefit from more thorough analysis before publication.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
TaYhv-q1Xit | Ringing ReLUs: Harmonic Distortion Analysis of Nonlinear Feedforward Networks | [
"Christian H.X. Ali Mehmeti-Göpel",
"David Hartmann",
"Michael Wand"
] | In this paper, we apply harmonic distortion analysis to understand the effect of nonlinearities in the spectral domain. Each nonlinear layer creates higher-frequency harmonics, which we call "blueshift", whose magnitude increases with network depth, thereby increasing the “roughness” of the output landscape. Unlike differential models (such as vanishing gradients, sharpness), this provides a more global view of how network architectures behave across larger areas of their parameter domain. For example, the model predicts that residual connections are able to counter the effect by dampening corresponding higher frequency modes. We empirically verify the connection between blueshift and architectural choices, and provide evidence for a connection with trainability. | [
"deep learning theory",
"loss landscape",
"harmonic distortion analysis",
"network trainability"
] | Accept (Poster) | https://openreview.net/pdf?id=TaYhv-q1Xit | https://openreview.net/forum?id=TaYhv-q1Xit | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"jk4zX-LIT0",
"D3PTEp4KaNX",
"L2Nh0o2ZMnl",
"ge0DepAaOUv",
"LGINz8mHHyx",
"Xbp4iesV5py",
"gnYaick3TP",
"iWD_OwvNTC1",
"PDQrTQJAIRf",
"U9OXDu9-nAB",
"38RJoz66etD",
"52APBAuFo49"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040426856,
1605612604460,
1605611355107,
1605611338692,
1605610418147,
1605610381858,
1605610311526,
1605608645522,
1604056589878,
1604001812927,
1603988216656,
1603895854348
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3561/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3561/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3561/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3561/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3561/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3561/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3561/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3561/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3561/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3561/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3561/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper presents an analysis of the spectral impact of non-linearities in a neural network, using harmonic distortion analysis as a means to quantify the effect they have in the spectral domain, linking a blue-shift phenomenon to architectural choices. This is an interesting analysis, that could be strengthened by a more thorough exploration of how this analysis relates to other properties, such as generalization, as well as through the impact of the blueshift effect through the training process.\"}",
"{\"title\": \"Author Response 2\", \"comment\": \"We would like to thank Reviewer 4 for the the constructive feedback. We would like to address the interesting questions brought up in the review. We will also address these issues in the revised paper has been uploaded.\\n\\n*Q: Current presentation mainly focus on explaining existing networks via blueshift measure. It does not \\\"predict\\\" new choices of the nonlinearity and architectures. This prediction could be related with the data: find the one nonlinearity and architectures that could best fit the data complexity. *\", \"a\": \"We perform the experiments near the initialization point as the initial behavior of the function appears to have particular impact on the results (see \\u201cCritical Learning Periods in Deep Neural Networks\\u201d, Achille et al.). The paper includes experiments during and after training (Fig. 10), as well as measurements in gradient directions only (Fig. 11) in the appendix. The experiments show that the blueshift effects becomes weaker during training but still remains present. See also the corresponding answer to Rev. 2.\\n\\nThe results are more global than a differential analysis that looks only at a gradient at single points. The Fourier view shows the norm of the gradient function over a finite-length path (or averages of many such path). It is true (and important to stress) that the view is not fully global - the domain of the network is unbounded and infinite, with common nonlinearities saturating in the outer regions. Therefore a spatial restriction is required (our experiments sample path starting from the current state of the network, initialization or later [appendix], in random directions, but keep the path length short to remain representative of that region).\\n\\nWe agree that this could be discussed more clearly; we will add qualifications to the revised version.\"}",
"{\"title\": \"Author Response 2 2/2\", \"comment\": \"*Q: What is the motivation for only focusing on networks at initialization? I would have loved to have seen what a pertained network looks like.*\", \"a\": \"Ensemble models are explained in our theory by the FDSA (Frequency dependent signal averaging) effect. In an ensemble, we would expect the loss surfaces of the sub-networks to mostly share their coarse structure (low frequencies), while presenting differences mostly in the details (high frequencies). By consequence, the mostly uncorrelated high frequencies average out and lose relative weight and the resulting ensemble loss surface is smoother than the individual loss surfaces.\"}",
"{\"title\": \"Author Response 2 1/2\", \"comment\": \"We would like to thank Reviewer 2 for the constructive and very detailed feedback. We have uploaded a revised version of the paper that addresses the issues raised in the review, including spelling mistakes and ambiguous formulations. For the technical questions, we would like to provide answers and further explanations below:\\n\\n*Q: a good paper to cite would be \\u201cAvoiding Pathologies in Very Deep Networks\\u201d (Duvenaud et al., 2014)*\", \"a\": \"This is an important point (we will add a clarification in the revision). Combinatorially, a ResNet has a 50-50-weighting for each block visited, therefore halving the average path length. However, the weight of these paths is not 50-50, as stacks of non-bypass-networks suffer from vanishing gradients (Veit et al., \\u201cResidual Networks Behave Like Ensembles of Relatively Shallow Networks\\u201d). At initialization, before running any batch-normalization, this seems to be a consequence of the uneven spectrum of Gaussian matrices which are uncorrelated across multiple layers (therefore attenuating in all directions when stacked). Empirically, the dampening is maintained after training (Veit et al.). When taking the dampening effect into account, the actual weight of the residual blocks become smaller and thus the contribution to the output too (this leads to the actual exponential downweighting).\"}",
"{\"title\": \"Author Response 3 3/3\", \"comment\": \"*Q: There is an attempt to connect exploding gradients to blueshifting. However, this is not entirely clear to me. Indeed, one can say that simultaneously we have blueshifting in the gradient and at the same time exploding gradients. Does this mean that one cause the other, however? Could n't one have explosions by having disproportionately large low order frequencies (not that it is the case, just wondering)? Or some other phenomenon.*\", \"a\": \"We use Leaky ReLU to control the amount of nonlinearity in the network ( \\\"more linear\\\" for alpha -> 1). If we approximate the leaky ReLU function by polynomials (using Chebyshef fits of degree 20) for varying alpha (see Figure 18d/e in the appendix), we can see how this leads to a slower drop-off the closer we get to ReLu from linear.\", \"q\": \"What does it mean 'making the networking more linear'? Do you mean increasing the \\\\alpha hyperparameter till it becomes 1, in which case you have a linear function?*\"}",
"{\"title\": \"Author Response 3 2/3\", \"comment\": \"*Q: I find figure 1 a bit perplexing. Again, I understand what is the message, but it is hard for me to connect it to the theory, since the theory makes only indirect references to the specific nonlinearities. Also, what is 'ReLU->ResNet' supposed to stand for? ResNet is a ReLU when including skip connections? And what is 'Linear->ReLU'? To put it otherwise, adding skip connections or a ReLU nonlinearity are discrete design choices. However, the figure has continuous axis. So, what exactly is illustrated? The 'vertical' axis corresponds to the t variable in the Fourier coefficient. What about the other axis?*\", \"a\": \"We have updated section 3.4 to clarify that blueshift generally causes exploding gradients (blueshift denotes harmonic creation at non-linearities; thus, applications of nonlinearities will increase frequency and thus gradient magnitude, see below). It is worth noting that the model predicts that the the *lowest* layers (close to the input) will have the largest gradient magnitude increase. Gradient magnitude shrinks with increasing layer index. Relatively speaking, one could also call this effect vanishing gradients.\"}",
"{\"title\": \"Author Response 3 1/3\", \"comment\": \"We would like to thank Reviewer 3 for the great amount of helpful feedback. A revised version of the paper that fixes typos and figure misplacements and clarifies the discussion has been uploaded. We would now like to answer the open technical questions:\\n\\n*Q: [...] However, what exactly is the message? That we should have only low frequencies? Or that we should have some high frequencies? [...]*\", \"a\": \"Our current theoretical analysis only holds for polynomial nonlinearities. It shows that larger non-linearity in the sense of larger higher-order polynomial coefficients lead to more blueshift. The Stone-Weierstrass theorem would of course permit an approximate of continuous function (as output by all relevant nonlinearities) as closely as desired by a polynomial, in order to consider a wider range of functions. However, the effect on the spectrum still remains harder to formally establish, as we have to consider two limit processes which cannot trivially be exchanged. While we are not able to formally prove this, it is still reasonable to conjecture that a tight approximation with finite degree (Figure 17) is sufficient to predict blueshifts of non-polynomial nonlinearities. We verify this experimentally by relating the drop-off rate of the polynomials obtained (Figure 18) with measured blueshift (Figure 16), which yields qualitatively correct results. We use a least-squares Chebyshev approximation for this experiment because it is easy to compute and (reasonably) stable for larger degrees (unlike, for example, equidistant point-wise fits, which oscillate, or Taylor expansions, which might not convergence).\"}",
"{\"title\": \"Author Response 1\", \"comment\": \"We would like to thank Reviewer 1 for the constructive feedback. We have uploaded a revised version of the paper with improved figure placement and that fixes the misnomers pointed out. We will also update Figure 6 which was admittedly botched for a lack of space to include v2 blocks as well. Concerning the overall goal, we agree with the assessment: Indeed, the intention of this paper is to gain a new perspective on already known phenomena, namely the impact of the choice of the nonlinearity and architecture on the degradation of the output surface with regard to network depth.\\n\\n*Q: Are the given plots achieved by averaging over multiple runs?*\", \"a\": \"Only the training plots contain multiple runs (5 runs for Figure 8 and 1 run per cell in Figure 6) as the blueshift plots are already averaged over many neurons/paths/initializations. We clarified this in the revised version of the paper.\"}",
"{\"title\": \"Hanrmonic distorsion in deep neural networks\", \"review\": \"The papers proposes an interesting analysis that links several aspects of architectural design in Deep NNs to the spectral analysis and observed roughness. Different activations functions are considered in the study, mainly centered on deep CNN with or without skip connections (in the framework of ResNet v1 and v2). The starting point, which is not novel, actually, but relevant, is that specific types of non-linearities introduce harmonic distortions, and the effect is potentially amplified when multiple non-linearities are stacked. Theoretically, the paper shows that there is a concrete link between architectural choices in the network design and the blueshift in the frequency domain. Experimentally, the observations support the mathematical analysis. All in all, some of the conclusions regarding trainability of CNN architectures with skip connections have been already noted and do not seem greatly new, but the paper introduces a nice perspective to see this phenomenon in another light.\\nThe paper is generally well written and I appreciated reading it.\\n\\nThe major downside I see in the current form of the manuscript is given by some aspects of the presentation. For instance, Fig. 1 is clearly misplaced (it should be in Section 4.4). Similarly, Figure 6 should be in Section 5. Moreover, abbreviations would be better used in a more uniform manner (e.g., SDFA, FDSA, SDSA). Regarding the reported experiments: are the given plots achieved by averaging over multiple runs? (only in one of the many experimental settings this information is given in the paper). Finally, the link between the left and the right sides in Figure 6 is not really straightforward, perhaps grouping together the short and the noshort results could be of help for the reader.\\n\\n-- EDIT: \\nThanks for the nice feedback during the rebuttal. 
I am happy to stay with my rating of clear acceptance.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official review\", \"review\": \"Summary: This paper proposes a new approach for how to analyze the ruggedness of the surface of the neural network loss. Specifically, the paper proposes to apply harmonic distortion on the weight-to-output (w-o) maps. That is, the method casts the w-o functions in the Fourier domain and then aggregate the surface characteristics by virtue of averaging the different order Fourier coefficients. The paper shows that non-linearities are responsible for blueshifting with deeper layers, that is for \\\"transferring more energy\\\" on the higher frequencies. The consequence is rougher surfaces, as well as higher frequencies for gradients, which can lead to exploding gradients in the deeper layers. The remedy is with skip connections and feature averaging, which although are methods already known to improve trainability, the paper corroborates that they also make sense in terms of said approach. The paper conducts various empirical and ablation studies, providing evidence of the claims.\", \"the_strengths_of_the_paper\": [\"I believe that at the core the paper offers some really nice preliminary ideas and intuitions. It is very intuitive that deeper layers generate higher frequencies to the loss and thus can make the optimization harder. On the other hand, higher frequencies are necessary for expressivity. This is another instantiation of the bias-variance tradeoff. By finding ways to balance the relative strength of high and low frequencies, one could throttle how much expressivity is necessary for the task at hand.\", \"The paper goes to great lengths to support some of the claims empirically. There is lots of different experiments and it seems the claims are overall supported by the findings. Different nonlinaerities and architectures (maybe too much, in that there is less focus) are explored, which is admirable.\", \"I like the quality of the visualizations. 
It is clear that the authors have spent quite some time in generating their figures.\"], \"the_weaknesses_of_the_paper\": [\"Sure, if we focus only on the low-order frequencies trainability is better. Also, I dare say that the insight is not exactly surprising although very intriguing. However, what exactly is the message? That we should have only low frequencies? Or that we should have some high frequencies? That skip connections are good for better training? I believe that in many ways, the message is incomplete if one leaves out expressivity and it would be nice to extend the theory to say something about the potency of the neural network on learning patterns and generalizing. The authors already comment on this in the 'future work' lines. I think that this should become current work, otherwise the work is incomplete, at least from the current perspective.\", \"I have the feeling there are places where the analysis is imprecise, although it could be that I also misunderstood.\", \"For one, the crux of the analysis is that the neural network nonlinearities are expressed in Fourier series (sec 3.2). Then, in the next section 3.3 the paper says that in practice nonlinearities are not polynomial and might not have a convergent Taylor expansion. So, instead a Chebyshev approximation is opted for. However, it is not clear if the Chebyshev approximation suffices or what its limits are; I think this must be elaborated further.\", \"Also, what are these Chebyshev approximations per nonlinearity? I think it is quite important to clarify this, considering there are nonlinearities that are very similar, e.g., the ReLU and the leaky ReLU. What is the big difference between the two in terms of the described analysis?\", \"I often find it hard to understand what the analysis tries to say; either the analysis is incomplete, the writing is generally unclear, or I simply don't understand some details. 
I list my comments by order of reading (not importance).\", \"Throughout the paper there is a clear desire to connect roughness with layer depth. However, in all equations and analysis the depth is not explicitly present. For instance, in equations 4-6 there is only the degree of the polynomial K, but no layer variable or index. From what I gather, the (implicit) argument is that by the successive stacking of layers, the corresponding low/high order frequencies get stronger or weaker, relatively. Then, the objective is to compare the corresponding low and high frequencies for different layers, showing that for deeper layers the higher frequencies get stronger because of the recursion. This is how depth is 'qualitatively' introduced as a variable. Is this indeed the intention? If yes, I think it can be written more explicitly.\", \"I find figure 1 a bit perplexing. Again, I understand what is the message, but it is hard for me to connect it to the theory, since the theory makes only indirect references to the specific nonlinearities. Also, what is 'ReLU->ResNet' supposed to stand for? ResNet is a ReLU when including skip connections? And what is 'Linear->ReLU'? To put it otherwise, adding skip connections or a ReLU nonlinearity are discrete design choices. However, the figure has continuous axis. So, what exactly is illustrated? The 'vertical' axis corresponds to the t variable in the Fourier coefficient. What about the other axis?\", \"The related work points to Li et al and their spectral analysis to ground the proposed research. However, it is not explained what these observations are and how they relate to the current paper. It would be nice for the reader to add a short explanation.\", \"Do we expect a difference by considering 1D slices, instead of 2D slices as motivated by Li et al? Why yes, why no?\", \"It is not explained why the mean paths are empirically zero-functions. 
I infer that this is the case because at any location of the loss surface, if we take a small ball around it there will be an equal amount of parameters for which there is a higher or lower loss value? However, wouldn't this imply already a strong gradient (about 1, if I am not mistaken)?\", \"What I find a bit confusing is that in equation 4 and 5 we apply the nonlinearity \\\\phi on p(t). However, p(t) are the 1-D slices of our neural network f defined in the preamble of section 3. I would assume that the nonlinearities would then already be inside p(t). In fact, in the preamble of 3 there is also a mention of \\\\phi and how p(t) is a polynomial when \\\\phi is the identity function. Maybe I have misunderstood something here.\", \"There is an attempt to connect exploding gradients to blueshifting. However, this is not entirely clear to me. Indeed, one can say that simultaneously we have blueshifting in the gradient and at the same time exploding gradients. Does this mean that one causes the other, however? Couldn't one have explosions by having disproportionately large low order frequencies (not that it is the case, just wondering)? Or some other phenomenon.\", \"There is a connection to exploding gradients; however, in deep networks vanishing gradients are also important (maybe more so). Can the analysis also address vanishing gradients?\", \"It is not exactly clear why the frequency dependent signal averaging is weaker than exponential downweighting. The explanation is very brief and a bit vague (law of large numbers, exponential decay). Is there a more precise qualitative or quantitative argument here?\", \"Can one still call the method more 'global' in saying something about roughness, given that all coefficients are computed per layer? Of course, each layer's coefficients are influenced by all previous layers, but is this enough to paint the method 'global'?\", \"Is somehow K (polynomial order) connected to L (number of layers)? 
Or is this relation the way I described above?\", \"Perhaps relevant to the previous point, and taking the position of the devil's advocate, in a way what is put forward by the paper is a re-interpretation of existing knowledge. While certainly very intriguing, is there a new insight on trainability for a new type of method/technique that can improve trainability? What about wide layers and them being easier/harder to train?\", \"The text in p. 6 on Fig 5 (In figure 5, we use the power law ... ) is unclear to me.\", \"In p. 7 there is a great misalignment between the figure references and the figure locations in the paper.\", \"How is leaky ReLU connected to other nonlinearities? How precisely does it make a difference? How different is the Chebyshev polynomial?\", \"What does it mean 'making the network more linear'? Do you mean increasing the \\\\alpha hyperparameter till it becomes 1, in which case you have a linear function?\", \"In general, I find the paper quite interesting and with valuable potential contributions, but incomplete and not ready for publication at this stage. I believe it would be worth it if the authors took the time to revisit the crispness of the message as well as the writing. Of course, I am more than happy to revisit my recommendation if the authors produce a convincing argument.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting measure and view of the deep neural network\", \"review\": \"Summary: The paper applies harmonic distortion analysis to understand the effect of nonlinearities in the spectral domain. This gives global view of the network output roughness.\", \"strong_points\": \"The paper introduces an interesting measure \\\"roughness\\\" of deep neural network via harmonic distortion analysis. It evaluates the blueshift near the initialization for various nonlinearity (ReLU, LReLU, TanH, Sigmoid,\\u2026) and architecture choices (no skip, skip, depth, width). The blueshift fits people's intuition of the architectural choices.\", \"weak_points\": \"1. Current presentation mainly focus on explaining existing networks via blueshift measure. It does not \\\"predict\\\" new choices of the nonlinearity and architectures. This prediction could be related with the data: find the one nonlinearity and architectures that could best fit the data complexity. \\n2. The hypothesis that spectral blueshift impedes training fit the observations that architectural choices with good harmonics generation control are easier to train to good performance. However this hypothesis is not consistent with the Sigmoid +NS case in Figure 16. I suppose it is hard to optimize Sigmoid+NS for a deep network (50 layers). \\n3. The paper claims this can give a global view of the roughness. But the measurements are near the initialization. In this sense, it would be better to look at the roughness measure out of the initialization neighborhood.\\n\\nI would not recommend the acceptance for now because of the above weak points.\\n\\nAfter the rebuttal\\n\\nI thank the authors for the detailed feedback. I am a bit with AnonReviewer3 about the concerns on the descriptive languages of the paper \\\"less nonlinear\\\", \\\"more expressiveness\\\". 
Moreover, the behaviors of different nonlinearities in Fig. 3 and Fig. 17 are related to the specific initialization and batch normalization, which is not really a global view of the landscape. I would keep the score unchanged.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Initial review\", \"review\": \"***Summary***\\n\\nI would firstly like to thank the authors for an interesting read. I enjoyed going through the submission very much.\\n\\nThe authors propose to understand the qualitative effects of nonlinearities by studying the impact they have on the Fourier spectrum of deep neural networks. The central hypothesis is that nonlinearities with a lot of energy in their side lobes (high frequencies), lead to neural networks that have a rougher mapping and that are consequently tougher to train because the derivative landscape is also rougher. They back this hypothesis up with some mathematical arguments from the area of harmonic distortion analysis and with empirical experiments to support the qualitative predictions of this theory.\\n\\n***Pros***\\n\\nI found the submission very readable. I think the balance of text to mathematics in the main submission was about right, reserving the appendix for a more in depth discussion.\\n\\nI think that while the central finding that deep mappings are smoother is, in itself, not particularly novel, the chain of reasoning to get to this fact is new. I like the use of the Fourier spectrum to show this and the analysis behind how the spectra of various nonlinearities affect overall network smoothness..\\n\\nThe choice of experiments, which sequentially back up the claims, makes for a good paper. I particularly enjoyed the results in Figure 2, which were very instructive and gave good insight into the predictions of the theory.\\n\\n\\n***Cons and constructive feedback***\\n\\nIn order from start to finish.\\n\\nIn the abstract should differential be differentiable?\\n\\nI think a good paper to cite would be \\u201cAvoiding Pathologies in Very Deep Networks\\u201d (Duvenaud et al., 2014) who analyze deep kernels in Gaussian processes. 
While the underlying models are different, the kinds of qualitative results in this paper are very similar to the submission.\\n\\nI am concerned about the use of the Fourier spectrum to model the ReLU nonlinearity. Will there not be issues with the Gibbs phenomenon? The discontinuous gradient will mean that a spectrum exists, but reconstructions are poor.\", \"paragraph_below_equation_3\": \"uniformely -> uniformly\", \"equation_4\": \"using t_j is confusing given that you use t in eqn 1. Please change to another symbol\", \"eqn_6\": \"Please define z versus z_j\\n\\nSection 3.2 discussion: I would assume that while higher order autocorrelations would broaden the spectrum they would also smooth it out. For high orders it would look Gaussian-like in shape. This would not necessarily lead to blue-shifting.\\n\\nSection 3.3: therfore -> therefore\\n\\nSection 3.4 trivial -> trivially\\n\\nSection 3.5: Exponential downweighting. ResNets have combinatorially more medium length paths than short or long ones. So the average weight of a medium path is far higher than short or long ones. I would have liked to have seen a deeper analysis of this effect.\", \"experiments\": \"I found these very interesting. What is the motivation for only focussing on networks at initialization? I would have loved to have seen what a pretrained network looks like.\\n\\nAre ensembles covered within the scope of this theory? They seem to have good performance but since each member is trained individually there is no smoothing of the training function, although the test loss function is smoother when all member models are combined.\\n\\n\\n***Post rebuttal review***\\n\\nHaving read the rebuttal, I am very happy with the author responses. My main concerns about the Gibbs phenomenon and the choice to consider blueshifting at initialization have been thoroughly addressed. 
It is clear to me that the authors have thought long and hard about the rebuttal and used it to improve their submission. Therefore I maintain that this is still a clear accept.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
DigrnXQNMTe | A generalized probability kernel on discrete distributions and its application in two-sample test | [
"Le Niu"
] | We propose a generalized probability kernel (GPK) on discrete distributions with finite support. This probability kernel, defined as a kernel between distributions instead of samples, generalizes existing discrepancy statistics such as maximum mean discrepancy (MMD) as well as probability product kernels, and extends to more general cases. For both existing and newly proposed statistics, we estimate them through empirical frequency and illustrate the strategy to analyze the resulting bias and convergence bounds. We further propose power-MMD, a natural extension of MMD in the framework of GPK, illustrating its usage for the task of two-sample test. Our work connects the fields of discrete distribution-property estimation and kernel-based hypothesis test, which might shed light on new possibilities. | [
"maximum mean discrepancy",
"RKHS",
"two-sample test",
"empirical estimator",
"discrete distributions"
] | Reject | https://openreview.net/pdf?id=DigrnXQNMTe | https://openreview.net/forum?id=DigrnXQNMTe | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"e347jbzsv91",
"vTnSiiBE2FK",
"H7gtCeOJAG",
"3ZjWR7fMMcK",
"CaKVh9U2Jk-",
"s2goYRaVjQR",
"51mRl3J2PK",
"SZK27KWfx2Z",
"6KYySP4YZT5",
"xn8vr0P4wAI"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040402471,
1605540940908,
1605538816501,
1605537310866,
1605536718515,
1605535394627,
1603883265764,
1603817904266,
1603705583407,
1603630765000
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3560/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3560/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3560/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3560/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3560/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3560/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3560/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3560/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3560/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The focus of the submission is to define divergences on discrete probability measures. Particularly, the authors propose a common generalization of the well-known concept of maximum mean discrepancy and kernel Stein discrepancy.\", \"as_summarized_by_the_reviewers_the_submission_is_in_a_rather_preliminary_form\": \"1)The work lacks motivation.\\n2)Literature review (there are 4 references in total) and numerical illustrations are missing.\\n3)The submission lacks proper mathematical formulation/rigor.\\nI highly recommend the authors to not submit similar draft manuscripts in the future.\"}",
"{\"title\": \"we have revised our notation, with more precise definitions.\", \"comment\": \"Thanks for your reviewing. We have totally remove the part of KSD as we find our proof of theorem 7. in the original version is not correct. In this revision, we mainly focus on polynomial GPK and bring some new results. From your comment, we noticed that in our first submission, we use K to represents kernel between distributions, kernel between values, and gram matrix. This is confusing, and in this revision we use different terms to denote them.\\n\\nIn response to your comments. \\n1. \\\"in Definition 1, you defined a kernel, on distributions p and q, that is a k x k matrix; while in definition 2, the notion of K, are on samples and is a scalar output.\\\" we define our probability kernel based on a gram matrix, and this gram matrix is produced by a continuous reproducing kernel function as in MMD cases.\\n\\n\\n2. \\\"it is unclear of how \\\\phi is defined in general; only examples are given later for specific cases so that we got an conjuncture\\\", yes in GPK framework, any element-wise mapping function is valid a function, see Definition 2 in this revision. Our idea is firstly define a framework with little constrains which could generalize a lot of cases. And we then narrow down to specific subsets equipped with some interesting properties. see section 4 of this revision for some examples\\n\\n\\n3. \\\"why is MMD_E^2 an unbiased estimator?\\\" We find out the plugin-estimator we proposed for GPK framework actually generalize the case of linear time statistics of MMD($MMD_l^2$), and the details of the derivation are in section 6.1 and theorem 2. Also there is a equivalent between the convergence bound of $MMD_l^2$ and our plugin-estimator, see remark3.1 \\n\\n\\n4. 
\\\"we need to know p and q to define k_{prob}; how is this going to be applied to two-sample test?\\\" We use plugin-estimators to estimate the GPK[p,q], and once it is a unbiased estimator, no matter what kind of p and q we have, given enough samples, we can always get accuracy estimate of GPK[p,q].(thus we don't need to know p and q beforehand). Furthermore, if GPK[p,q]=0 if and only if p=q, the convergence bound of the estimators could be used to provide acceptance region of null hypothesis p=q, see Corollary 3.3.\"}",
"{\"title\": \"we have revised the notation we used, and corrected the typos, hoping this version looks better\", \"comment\": \"Thanks a lot for your reviewing. We agree our first version is full of typos, and we are sorry for this. In this revision, we have updated the notation section, which provides more precise definitions, and we also checked the typos.\\nSince you mentioned citep and citet, we assume ICLR recommend to always use citep instead of citet, is that correct? \\nOn the theoretical side, yes in the case of categorical distribution, values of discrete variables does not relate to any notion of distance. However, many natural processes will produce discrete distributions where there\\npossibly exists a similarity measure in values which imply the similarity in frequencies of occurrence(\\nprobability values). A good example is the word2vec techniques in NLP tasks. Although the words in a vocabulary is apparently a case of categorical distribution, given enough training data, similarity measure\\nbetween words could be made which implies their similarity in probability values. This is also exactly the case of MMD in discrete setting. And our works is a direct extension of MMD. We introduced this discussion in section 4.2\\n\\nFor your comment related to $l=k$ cases of polynomial GPK, we summarize this result as power-MMD(see section 6). Note that we have also slightly modified our definition of polynomial GPK which generalizes more cases.\"}",
"{\"title\": \"we have new results with convergence bound, and based on the GPK framework, we propose a new statistics for two-sample test with better performance than linear time MMD\", \"comment\": \"Thank you for your reviewing. In this revision, we present the usage of our GPK framework in proposing new statistics for two-sample test, which we call power-MMD. power-MMD is a direct extension of MMD in the framework of GPK. We provide unbiased estimator and convergence bounds of it.\\n\\nAs to KSD, since we found that our proof in theorem 7.(related with KSD) in our first version of paper is not correct, we do not have any new result related with KSD anymore. Thus we totally remove the discussion of KSD in this first revision. \\n\\nWe also revise the presentation, including the typos and math notations, hoping this version would look better.\"}",
"{\"title\": \"we have revised the submission, with some valuable new results\", \"comment\": \"Thank you for reviewing our submission. We admit that our first version of paper has a lot of problem, hoping this revision would be better. In response to the comments:\\n1. a. we call it a kernel because it is a more general definition, which include the cases where GPK[p,q] increase when p and q become similar. Since the mapping function $\\\\phi$ allows a great number of possibilities, we do not know if every case satisfies the requirement of distance measure.\\n1. b. d. we have modified the definition, and we think this time it will be clear\\n1. c. we assume the values of the discrete distribution Y are in d dimensional space R^d\\n2.3.4.5 we modified our theory and have new results related with polynomial GPK\\n7. Bernstein polynomial is used to search for unbiased estimators, see theorem 2. in this revision.\"}",
"{\"title\": \"description of first revision\", \"comment\": \"As we found that our proof in theorem 7.(related with KSD) in our first version of paper is not correct, we do not have any new result related with KSD anymore. Thus we totally remove the discussion of KSD in this first revision. We then focus on the discussion of polynomial GPK. We generalize the result of unbiased estimator in our first version into a more general theorem(theorem 2), and provide convergence bound of the unbiased estimator we discovered(theorem 3). We apply the result to a special case of polynomial GPK, which we call it power-MMD. We illustrate that power-MMD could also be used for two-sample test.\\nIn our first submission, we use K to represents kernel between distributions, kernel between values, and gram matrix. This is confusing, and in this revision we use different terms to denote them. We also update the Notation section to have a more accuracy description of the terms we introduced.\"}",
"{\"title\": \"Seems to be an incomplete submission with missing details\", \"review\": \"The works proposes a generalization of MMD-squared distance. However, the submission seems to be an incomplete one.\", \"major_comments\": \"1. Definition 2 seems to be the key definition in the work. However, there are multiple issues:\\n a. It is not clear why it is called a kernel? Should not it be called distance? After all, it generalizes MMD-squared!\\n b. \\\"$K$\\\" seems to be mixed up with \\\"$k$\\\". \\\"$K$\\\" seems to be the gram matrix and not the kernel.\\n c. $k(y_i, y_j)$ is from $Y \\\\times Y\\\\rightarrow R$, and not from $R^n \\\\times R^n \\\\rightarrow R$.\\n d. why should \\\"\\\\phi\\\" belong to RKHS of k? Recall that RKHS of k will contain functions from R^n\\\\rightarrow R.\\n2. I agree with the write-up which states that results in section 4.2 are trivial.\\n3. section 4.3.1 are known results and need to be skipped.\\n4. Proof of theorem 5 seems to be completely missing. How is sup over f removed?\\n5. In Definition 6, what is $f$? Is a sup over f missing? Because of this and the previous issue, section 5.1 seems very incomplete.\\n6. Simulation section seems to be completely missing.\\n7. Connection with Bernstien polynomials highlighted in intro etc. seems to be missing.\", \"rating\": \"1: Trivial or wrong\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting topic, but poorly executed\", \"review\": [\"Summary.\", \"The authors describe a family of kernel functions on discrete probability measures. The kernel generalizes existing discrepancies such as the MMD and the KSD. The authors further provide plugin estimates based on empirical frequencies and some arguments for unbiased-ness.\", \"While it is interesting to think about alternative estimators for comparing probability distributions, this paper falls short on the execution. I would recommend a major work-over before considering a submission again. There are many missing points in theory, experiments (there are none), and presentation. See below.\", \"It is unclear to me why we would care about the proposed estimators\", \"There is no analysis showing that the presented kernels are useful in any way.\", \"I appreciate that the authors show that existing discrepancies are special cases of the proposed one, but I wonder again what that is useful for?\", \"This is in particular as the authors do not provide any sort of asymptotic analysis of the presented estimators. How can we use them for two-sample testing without that? Answering this question is one of the major parts of the kernel two-sample testing literature.\", \"Theory.\", \"The presented theory consists of elementary manipulations that mostly follow existing literature, so there is very little actual innovation. 
See, for example, page 5.\", \"Experimental evaluation\", \"There is *no* experimental evaluation of the proposed estimators.\", \"It would have been interesting to compare the variances as a function of dataset characteristics.\", \"Presentation\", \"There are many grammar glitches, spelling mistakes, missing articles, etc., to the point that it is hard to follow.\", \"There is no overview of the series of arguments in the later part of the paper.\", \"There are a lot of re-cited derivations from existing papers.\"], \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The paper has numerous typos, with approximate English and lacks of rigorousness when introducing mathematical concepts; multiple notations are never defined.\", \"review\": [\"The paper under review proposes to generalize MMD for discrete random variables whose labels take value in $\\\\mathbb{R}^k$. They propose to estimate these generalized probability kernel distance using empirical estimators. Their properties are studied for two particular examples, namely a kernelized Stein discrenpancy and polynomials versions. Consistency and bias of both estimators are studied and bias corrected.\", \"The paper has numerous typos, with approximate English and lacks of rigorousness when introducing mathematical concepts; multiple notations are never defined. On the theoretical side, the setting on which the contribution relies on is quite strange: in general, labels of discrete variables does not relate to any notion of distance/ordering as in a classical RKHS setting making the relevance of the methodology quite questionable. These point with other technical issues are summarized in following remarks:\", \"## Major points\", \"As mentioned above, it is quite rare in a discrete setting that the labels lies in $\\\\mathbb{R}$ and satisfies a notion of distance. The authors should better motivate this setting by giving at least on relevant example, either theoretical or practical, where such structure is relevant.\", \"What does a Stein operator in a discrete setting means? There is a diffenretial operator in Definition 5 that is difficult to generalize and apply in a discrete context.\", \"The symmetric KDSD introduced in Definition 6 is claimed to be a probability kernel, but the proof that is satisfies Definition 2 is not given.\", \"The so-called polynomial probability kernel seems to obviously require $l=k$ to satisfy conditions of Definition 3, i.e., that $|\\\\phi(q,p)\\\\| = 0$ implies that $p=q$. 
It can be called a probability kernel only under such a condition.\", \"## Minor points\", \"The paper has numerous typos and imprecisions; a subset of them is listed here.\", \"p.1: 'underline' should be 'underlying'.\", \"p.1: when using 'i.e.', always write 'i.e.,'.\", \"p.1 and onward: there is always a space missing before each parenthesis.\", \"p.1: Yi & Along (2020) should be a citep and not citet.\", \"p.1: 'remain futher study' should be, for instance, 'is left for future work'.\", \"p.1: KSD is not defined yet.\", \"p.2: 'the introducting' should be 'the introduction'.\", \"p.2: 'in representing' is not right.\", \"p.2: Is $[k]$ the sample space? If yes, what is $\\\\{x_1,\\\\dots,x_n\\\\}$? A sample? What is the probability measure $v_i$? Do you mean the probability that $X$ falls in $v_i$?\", \"p.2 Definition 1: 'Given that distributionS p and q belong ... distributionS with...'. Also 'map' is singular. What is the 'function space' that you refer to? Also, where does this definition come from? Please give proper referencing.\", \"p.2: Why is there a line break right at the start of 4.1?\", \"p.2: what is an 'instance of integral probability metric'?\", \"p.2: last equation: $\\\\mu_p$ is not defined, the product operator $<.,.>_{\\\\mathcal{H}}$ is not defined, and $\\\\mathcal{H}$ is not defined.\", \"p.3: what are these 'embedding functions'?\", \"p.3: the RBH kernel is not defined.\", \"p.3 second equation: what is $\\\\phi$?\", \"p.3 Definition 2: the index of the sample space should be $k$, i.e., $y_1,\\\\dots,y_k$, if it refers to the distributions, and $n$ for a sample. Here it should be $k$ as it is written 'distribution'.\", \"p.3: 'examINE', 'members'.\", \"p.4: the 'brief' proof provided here only works for discrete variables, while the proof in Gretton et al. deals with continuous variables.\", \"p.4: what is the 'term above'?\", \"p.5: 'illustrate'.\", \"p.5: there should not be such a thing as an 'art' in science. 
If you raise that question, then you should formally discuss this topic (choice of optimal $\\\\phi$).\", \"p.5 Second equation: what are $x_s$ and $x_t$? Notations between this equation and the next are not consistent ($n$ is paired with $x$ and then with $y$ in the next equation).\", \"p.6: what is this so-called 'same property'?\", \"p.6: pmfs is never defined.\", \"p.6 Definition 5: notation $\\\\mathcal{A}$ is never used. $s_p$ is not defined. $\\\\Delta^*$ is not defined. If the latter is a differential operator, what does it mean in the context of discrete random variables?\", \"p.6: what is 'form 5'?\", \"p.7: what are forms 5 and 6?\", \"p.7 Theorem 5: operator $L$ is not defined. A dot is missing. Are $p$ and $q$ density functions or pmfs?\", \"p.7: 'preliminary results'.\", \"p.7: what does 'justing' mean?\", \"p.8: what is requirement $2$?\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"new kernels incorporating a notion of probability are proposed to perform two-sample tests, but they are not yet clearly defined/explained.\", \"review\": \"This paper tries to propose a kernel-based discrepancy measure, called the generalised probability kernel, that can unify MMD and KSD, which is an interesting topic of discussion. The paper applies the new discrepancy to perform two-sample tests.\\nThe new kernel proposed, unlike previous RKHS kernels that only depend on data points, incorporates the notion of probability, e.g., the kernel K_{p,q} depends on the densities p and q. A symmetric version of the discrete KSD is also discussed.\\nAlthough the idea is interesting, there are several flaws, which are reviewed below.\\n\\nFirstly, I think the paper is not clearly presented, with some confusing notation.\\n--in Definition 1, you define a kernel, on distributions p and q, that is a k x k matrix; while in Definition 2, the notion of K is on samples and has a scalar output.\\nIt is unclear how \\\\phi is defined in general; only examples are given later for specific cases, so we only get a conjecture.\\n--in Definition 5, why is it different from the Stein operator of KDSD? Or is it supposed to say difference operator?\", \"in_addition_i_have_several_confusions\": \"1. why is MMD_E^2 an unbiased estimator? what happened to k(x_i, x_i)? it is not clear from the Bernstein polynomial introduced in the appendix. \\n2. in the abstract, it is claimed that the kernels are between distributions instead of samples, but in the main text the evaluation is still at p_i=p(x_i) on samples; I am confused about the difference and novelty claimed.\\n3. The above concern brings up a question when applying this to two-sample tests. \\n--When the MMD is used to perform a two-sample test, it is assumed that both p and q are unknown. However, to my understanding, we need to know p and q to define k_{prob}; how is this going to be applied to a two-sample test? 
\\n--for the KSD setting, when the symmetric KDSD is introduced, it also seems to require p and q to be known for two-sample testing. In the Liu2016 setting, where a goodness-of-fit test is proposed with KSD, q is known (up to normalization) while p is unknown with samples; that is a key point of why KSD is useful for goodness-of-fit testing.\\nIn addition, is there any argument on why the symmetric KDSD might be better than the KDSD of Yang et al. 2018?\\n\\nAn additional point regards the literature review, which has yet to be thoroughly checked; e.g., Chwialkowski et al., \\\"A kernel test of goodness of fit,\\\" proposed independently of Liu et al. for the KSD goodness-of-fit test, might be useful to cite.\\n\\nIn my view, ICLR may not be a good fit as a venue either. More review and clarifications may be required, for both the kernel construction and its application.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
VyDYSMx1sFU | End-to-End on-device Federated Learning: A case study | [
"Hongyi Zhang",
"Jan Bosch",
"Helena Holmström Olsson"
] | With the development of computation capability in devices, companies are eager to utilize ML/DL methods to improve their service quality. However, with traditional Machine Learning approaches, companies need to build up a powerful data center to collect data and perform centralized model training, which turns out to be expensive and inefficient. Federated Learning has been introduced to solve this challenge. Because of its characteristics such as model-only exchange and parallel training, the technique can not only preserve user data privacy but also accelerate model training speed. In this paper, we introduce an approach to end-to-end on-device Machine Learning by utilizing Federated Learning. We validate our approach with an important industrial use case, the wheel steering angle prediction in the field of autonomous driving. Our results show that Federated Learning can significantly improve the quality of local edge models and reach the same accuracy level as compared to the traditional centralized Machine Learning approach without its negative effects. Furthermore, Federated Learning can accelerate model training speed and reduce the communication overhead, which proves that this approach has great strength when deploying ML/DL components to real-world embedded systems. | [
"Federated Learning",
"Machine Learning",
"End-to-End Learning",
"Artificial Intelligence"
] | Reject | https://openreview.net/pdf?id=VyDYSMx1sFU | https://openreview.net/forum?id=VyDYSMx1sFU | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"NFVC7kgzNHe",
"Icsyeh_VTOk",
"PUTR6RM85_v",
"8pP3s7gp_yX",
"kt5JZTV0Z86"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040436571,
1603944323377,
1603862860777,
1603665930909,
1603665550729
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3559/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3559/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3559/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3559/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes the use of federated learning to the application of steering wheel prediction for autonomous driving. While the application is new and interesting, the reviewers felt that the approach and results were mostly empirical. I suggest that the authors improve the conceptual/algorithmic contribution of the paper in a revised draft. Another suggestion is to include a better explanation of hyper-parameter optimization used in the experiments. I hope that the reviewers' constructive comments will help the authors revise the draft adequately for submission to a future venue!\"}",
"{\"title\": \"maybe consider other, more application-oriented venues?\", \"review\": [\"This paper applies federated learning to steering wheel prediction for autonomous driving. \\\"Federated learning\\\" in this draft mainly refers to an on-device distributed training algorithm where each edge device hosts its private data, performs local updates (model training), and sends the updates back to a central server to aggregate. More specifically, this paper uses the most well-known algorithm in federated learning, FedAvg (McMahan et al. 2017).\", \"Pros\", \"The application is real and seems important.\", \"Distributed/federated learning makes sense for this application.\", \"Cons\", \"The main contributions of the draft are not clear. It looks to me that such empirical studies of a well-known algorithm on a specific application would better fit a more application-oriented or system-oriented venue, e.g., CVPR, SysML.\", \"How are the hyperparameters tuned for the centralized and federated settings?\", \"What is the hardware on the edge devices/vehicles, and what is the hardware in the datacenter for centralized training? The draft mentioned Tesla T4 GPUs, but it is not clear exactly how much computation power has been used.\", \"Could the authors clarify \\\"companies need to build up a powerful data center to collect data and perform centralized model training, which turns out to be expensive and inefficient. Federated Learning has been introduced to solve this challenge.\\\"? As far as I know, the primary motivation for federated learning is privacy protection. Edge devices have far less computation power and a big communication barrier; why would federated learning solve \\\"this challenge\\\"?\", \"The following sentences seem to go against the anonymity rules: \\\"Our previous research shows the challenges of deploying AI/ML components into a real-world industrial context. 
As we defined in \\u201dEngineering AI Systems: A Research Agenda\\u201d (Bosch et al., 2020), AI engineering refers to AI/ML-driven software development and deployment in production contexts. We found that the transition from prototype to the production-quality deployment of ML models proves to be challenging for many companies (L\\u2019heureux et al., 2017b) (Lwakatare et al., 2019).\\\"\"], \"some_minor_improvement\": \"The abbreviation \\u201cML/DL\\u201d is never introduced.\\nIt seems unnecessary to capitalize \\u201cMachine Learning\\u201d and \\u201cFederated Learning\\u201d. \\nConsider citing the original FedAvg (McMahan et al. 2017) paper instead of (Li et al., 2019).\\n\\n====== post rebuttal ======\\n\\nI do not think the response addressed my concerns. I would strongly suggest the authors reconsider the design choices where I raised questions. Note that these are not only clarification questions, but also questions about the fundamentals of machine learning and federated learning.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An implementation of federated learning on a use case in autonomous driving. *The paper is not properly anonymized*\", \"review\": \"******************************************************************************\\n\\nThe paper is not properly anonymized. The intro refers to \\u201cOur previous research\\u201d and says \\u201cAs we defined in Engineering AI Systems: A Research Agenda, (Bosch et al., 2020), \\u2026 .\\u201d As such it violates the anonymity policy. \\n******************************************************************************\\n\\nThis paper describes an end-to-end implementation of Federated Learning (FL) on a use case of steering wheel prediction in autonomous driving. It provides an empirical evaluation on real-world autonomous driving datasets and shows improved performance compared to centralized learning methods.\", \"pros\": \"It is interesting to see an implementation of FL on a real-world use case. The paper also does well in comparing different factors such as training time and bandwidth cost for FL and centralized training.\", \"cons\": \"The paper doesn\\u2019t have enough technical depth to be accepted at ICLR and reads more like a report than a paper. It mainly describes the implementation of FL for a real-world application, which, although important, does not contribute to the field in terms of developing better algorithms or better understanding the current ones. \\n\\nA large part of the experiment section describes the hardware features, network structure and training method in great detail, which seems redundant or unnecessary for an ICLR submission. For example, section 4 reads \\u201cThe weights of the CNN are adjusted using back propagation to enforce the model output as close as possible to the desired output.\\u201d, which is obvious to most readers. \\n\\nThere are also some statements in the paper that are not quite scientific or concrete. 
For example, the intro reads \\u201cdue to the characteristics of Federated Learning, on-device training becomes possible.\\u201d This is not true as on-device training is not becoming possible due to FL, though FL certainly requires it.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Nice case study for Federated Learning on autonomous driving application but no actual research proposed.\", \"review\": \"This paper presents a case study that applies Federated Learning for steering angle prediction in self-driving cars. All methods used have been previously proposed in the literature.\", \"pros\": [\"A case study for an industrial use of federated learning (in autonomous driving application).\", \"Results do show that Federated Learning can give accuracy close to a centralized model for this application but without having to send data to the server (thus saving training time and communication bandwidth requirements).\"], \"cons\": \"- No actual research contribution since nothing new is proposed in this paper.\\n- While the training time and communication bandwidth savings are a good validation, this is not surprising since Federated Learning has been shown to have this benefit for many applications. \\n\\n========== UPDATE AFTER REBUTTAL ===========\\nI have read the author's response. While the case study for industrial applications is important, it would probably be much more impactful if the same study was done on a much larger/realistic scale. For instance, right now it appears that each edge vehicle gets an already available dataset for federated learning, which may have been cleaned and preprocessed properly. For claiming a real industrial deployment/importance, it would have been great if the study was conducted with vehicles receiving real-time data from real vehicles which is prone to be extremely noisy (although the reviewer is not sure if this would be possible for regulatory reasons (e.g., if such learning experiments would be safe enough on real autonomous vehicles as these applications are safety-critical)). Currently, the paper neither has significant enough contributions from novelty side, nor from industrial deployment angle. Hence, as such, the paper cannot be accepted. 
Perhaps more application-oriented conferences may be more suitable for this kind of work.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Evaluation of on-device federated learning for steering wheel angle prediction\", \"review\": \"The study evaluates federated learning (FL) in the context of steering wheel angle prediction, which is relevant for autonomous driving systems. The authors compare against two baselines, a centrally-computed and a locally-computed model, and measure prediction error, training time and bandwidth cost. The work evaluates an existing approach and therefore its novelty and impact are limited. It does provide an interesting evaluation of FL for a relevant use case. Federated learning, as the authors indicate in the manuscript, is a promising approach for training ML applications while preserving user privacy, which is key to many industrial ML applications such as voice assistants and computer vision algorithms. For that reason, the impact of the paper is significant despite not being very original. The authors carry out a very simple study, which nevertheless seems sufficient to demonstrate that FL can have computational advantages, namely reduced training times and bandwidth costs. A challenging application of FL is ML applications that run on small devices that people carry around all the time, such as mobile phones and wearable devices. In that scenario, there is the additional constraint that resources for training models on device are typically limited; the smaller the device, the more limited. An interesting extension of this study would be to evaluate the amount of computational resources used on the device as an additional evaluation metric. It would be great if the authors could add this metric to the present paper, but it could also be something for a follow-up publication; in other words, I do not think it is needed for this paper to be published.\\n\\n[Update after author's rebuttal]\\nI do not see any reason to modify my rating. 
I also identified the self-citation, but it did not affect my rating or evaluation of the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
QB7FkNVAfxa | On the Explicit Role of Initialization on the Convergence and Generalization Properties of Overparametrized Linear Networks | [
"Hancheng Min",
"Salma Tarmoun",
"Rene Vidal",
"Enrique Mallada"
] | Neural networks trained via gradient descent with random initialization and without any regularization enjoy good generalization performance in practice despite being highly overparametrized. A promising direction to explain this phenomenon is the \emph{Neural Tangent Kernel} (NTK), which characterizes the implicit regularization effect of gradient flow/descent on infinitely wide neural networks with random initialization. However, a non-asymptotic analysis that connects generalization performance, initialization, and optimization for finite width networks remains elusive. In this paper, we present a novel analysis of overparametrized single-hidden layer linear networks, which formally connects initialization, optimization, and overparametrization with generalization performance. We exploit the fact that gradient flow preserves a certain matrix that characterizes the \emph{imbalance} of the network weights, to show that the squared loss converges exponentially at a rate that depends on the level of imbalance of the initialization. Such guarantees on the convergence rate allow us to show that large hidden layer width, together with (properly scaled) random initialization, implicitly constrains the dynamics of the network parameters to be close to a low-dimensional manifold. In turn, minimizing the loss over this manifold leads to solutions with good generalization, which correspond to the min-norm solution in the linear case. Finally, we derive a novel $\mathcal{O}( h^{-1/2})$ upper-bound on the operator norm distance between the trained network and the min-norm solution, where $h$ is the hidden layer width. | [
"initialization",
"random initialization",
"explicit role",
"convergence",
"generalization properties",
"generalization performance",
"optimization",
"imbalance",
"manifold",
"solution"
] | Reject | https://openreview.net/pdf?id=QB7FkNVAfxa | https://openreview.net/forum?id=QB7FkNVAfxa | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"SonXgBfOmfL",
"oNV9B6-aseS",
"LyVbn7-fAd",
"2n9gZpZeRwt",
"gWg7rTnG3KF",
"aKEiI83kB_j",
"G4Sfk3Vvbgo",
"1WATnh7UIOg",
"GtYYaSorf9",
"d-J1FoJHzVf"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040357853,
1606302100217,
1606301863628,
1606301648019,
1606301523059,
1606301230668,
1603927782070,
1603867348563,
1603796734111,
1603669244619
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3558/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3558/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3558/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3558/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3558/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3558/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3558/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3558/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3558/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The authors provide a new analysis of learning of two-layer linear networks with gradient flow, leading to some novel optimization and generalization guarantees incorporating a notion of the imbalance in the weights. While there was some diversity of opinion, the prevailing view was that the results were not sufficiently significant for publication in ICLR.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your comments on our paper and suggestions. Below are our response to your comments:\\n\\n1) **Imbalance matrix and its singular values**:\\n\\nWe greatly appreciate your suggestions and we have added the proof right after we state that the imbalance matrix is time-invariant in Section 3.2.\\n\\nRegarding the singular values of the imbalance matrix, we have added the result in Appendix F. We show that when the entries are initialized with $\\\\mathcal N(0,h^{-1})$ (setting $\\\\alpha=1/2$ in Claim F.1) all non-zero singular values of the imbalance matrix concentrate to 1 as $h$ increases.\\n\\n2) **Property of min-norm solution and generalization bound**:\\n\\nThank you for your comment. In our paper, we focus on the implicit regularization of overparametrized networks as the generalization property of our interest. We are interested in how random initialization and overparametrization leads to a regularized solution, close to the minimum norm, rather than asking how good such a regularized solution is. Making statements on the generalization properties of the minimum norm solution $\\\\hat{\\\\Theta}$ further requires making additional assumptions on the data, that go beyond qualitative conditions such as rank. \\nWe thus think that a sketch of other papers on the property of min-norm solution $\\\\hat{\\\\Theta}$, which will require an additional set of assumptions, may not be beneficial to the readers. \\n\\nFor the same reason, our analysis on implicit regularization does not directly suggest a generalization bound. But we also agree that it would be interesting to derive generalization bounds for wide networks given additional assumptions on the data.\\n\\n3) **Extension to nonlinear networks**: \\n\\nIndeed, understanding the convergence and generalization properties of networks with nonlinear activation will be of most practical value. 
However, the case of linear networks has not been fully understood yet. For example, the fact that the imbalance contributes to the exponential convergence has not been shown previously. We believe that we need a deep understanding of simple models that allow for tighter analyses and counterfactual thinking in order to thoroughly understand the more general cases, with non-linear activations, that are used in deep learning. \\n\\nReviewer 1 also asked about the challenges in extending our analysis to ReLU networks. We reiterate our answer next for convenience. \\nFor the convergence analysis, if the activation is a ReLU, the diagonal terms of the imbalance matrix will be preserved (Du et al. 2018) under differential inclusion. There is still invariance in the imbalance matrix, but it is unclear how it contributes to the convergence of the learning dynamics. For the generalization property/implicit regularization, we believe the key challenge would be identifying the good manifold in terms of generalization, given certain data distribution assumptions.\\n\\n4) **Comparison with other literature**:\\n\\nThank you for the comment. We carefully read several of Rong's papers on matrix/tensor factorization, which we think could be related. But we could not see a direct relationship that merited a detailed comparison. Those papers have settings that are substantially different from ours: for example, they either do not consider a growing overparametrization via width $h$, or they consider different training procedures than gradient flow/descent, and thus we don't think a meaningful comparison is possible.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your comments on our paper. We are glad you find our results interesting and novel. Below are our responses to your comments:\\n\\n1) **Dependence on the input dimension**:\\n\\nSince we use basic random matrix theory to prove our concentration result, the input dimension naturally arises as we study the singular values of the matrix $\\\\begin{bmatrix}V^T &U^T\\\\end{bmatrix}$. It would be interesting to see whether such dependence is necessary, and we think the answer will be clear once we understand the concentration of the imbalance matrix and other matrices of interest in our analysis, such as $U_1U_2^T,VU_2^T$. \\n\\nAs for the comparison with previous works, to the best of our knowledge, a non-asymptotic bound between the wide linear network and the min-norm solution has not been shown previously. Therefore there might not be a direct comparison. \\n\\n2) **Noisy Gradient Descent**:\\n\\nThank you for your question. We believe a similar analysis would work for gradient descent and stochastic gradient descent with a sufficiently small step size. While there are additional challenges when moving to a discrete-time analysis, we are working in that direction. However, our intuition in this regard suggests that noise will require a faster rate of convergence to counteract/limit the drifting that would move the trajectories away from the \\\"good\\\" manifold.\\n\\n3) **Extension to nonlinear networks**:\\n\\nFor the convergence analysis, if the activation is a ReLU, the diagonal terms of the imbalance matrix will be preserved (Du et al. 2018) under differential inclusion. 
There is still invariance in the imbalance matrix, but it is unclear how it contributes to the convergence of the learning dynamics.\\n\\nFor the generalization property/implicit regularization, we believe the key challenge would be identifying the good manifold in terms of generalization, given certain data distribution assumptions.\\n\\n\\n**References**:\\n\\nSimon S Du, Wei Hu, and Jason D Lee. \\\"Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced\\\". In Advances in Neural Information Processing Systems, 2018.\"}",
"{\"title\": \"Response to Reviewer 3 (Part 2)\", \"comment\": \"3) **Comparison with NTK and the scale of initialization**:\\n\\nThis comment has led to new results and a deeper understanding. We greatly thank the reviewer for this. \\n\\nWe believe there is nothing wrong with being unconventional. Our initialization is indeed different from the one used in NTK, but it is arguably better as it achieves the same limiting end-to-end function, but at a faster convergence rate, as our analysis shows. To clarify this issue, we added a detailed comparison of our problem setting with previous works on NTK analysis (See Appendix E.). In particular, we show that one can relate our model assumptions to the NTK ones by rescaling the parameters and time. This time scaling leads to a slower convergence rate. We also note that our result does not rely on studying the tangent kernel of the network, hence there is a significant difference between our approach and the NTK one.\\n\\nMoreover, we now prove thm 2 in a more general setting where the variance for the entries of $U,V$ is $h^{-2\\\\alpha}$, where $1/4<\\\\alpha\\\\leq 1/2$. The case $\\\\alpha=1/2$, i.e. the variance is $h^{-1}$, is a particular case we consider in the main paper. Finally, we note that our analysis can not make the variance smaller, i.e. $\\\\alpha>1/2$, because the imbalance singular value is vanishingly small as $h$ increases. Please see Appendix F. for the general result for different initialization scale.\\n\\n4) **Wrong claim in the contribution**:\\n\\nWe apologize for our misleading phrase. What we intended to say is \\\"To the best of our knowledge, this is the first non-asymptotic bound regarding the generalization property of wide linear networks under random initialization in the global sense.\\\" We have modified the text accordingly. Regarding the non-asymptotic bound from NTK papers, we believe we properly cited related papers (Arora et al., 2019b; Buchanan et al., 2020). 
\\n\\n**References**:\\n\\nSanjeev Arora, Nadav Cohen, Noah Golowich, and Wei Hu. \\\"A convergence analysis of gradient descent for deep linear neural networks\\\". In International Conference on Learning Representations, 2018a.\\n\\nSanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang.\\n\\\"On exact computation with an infinitely wide neural net\\\". In Advances in Neural Information\\nProcessing Systems, pp. 8141\\u20138150, 2019b.\\n\\nSam Buchanan, Dar Gilboa, and John Wright. \\\"Deep networks and the multiple manifold problem\\\".\", \"arxiv_preprint_arxiv\": \"2008.11245, 2020.\"}",
"{\"title\": \"Response to Reviewer 3 (Part 1)\", \"comment\": \"Thank you for your comments on our paper. Below are our response to your comments:\\n\\n0) **Significance of our results**:\\n\\nWe respectfully disagree with the reviewer's comment about the significance of our work and we have provided a detailed response in the general comments. That being said, we have taken the reviewer's comment very seriously and modified the paper to better articulate the significance of our results. In addition, the reviewer's comments inspired extensions to our analysis that are now also included in appendices E and F. We thank the reviewer for this. Also, we are sorry the reviewer feels the interpretation of our results is misleading. We have thoroughly looked at our paper to identify statements that may have been construed as misleading, and we edited the paper accordingly. We hope the reviewer finds our modified statements concise and accurate.\\n\\n1) **Regarding the Convergence Result**:\\n\\nBoth our results and Arora's are sufficient conditions that are valid in complementary regimes. Therefore, one should not expect our results to capture Arora's and vice versa. Specifically, Our result is not showing that $e^{-ct}$ is a tight characterization of the convergence rate of overparametrized linear networks, we never stated this, and we don't think it is true. Rather, we show that the imbalance is a factor that contributes to the exponential convergence of such networks. In the paper, we do not suggest one should artificially make $c$ large for fast convergence, but rather, we show that random initialization together with overparametrization naturally satisfies the rank condition of the imbalance. And this is sufficient for us to further guarantee that the gradient flow stays close to the good generalization manifold.\\n\\nPrevious works (Arora et al. 2018a etc.) have shown very insightful results on the convergence rate when the imbalance is zero. 
We see our result as a complement to those works because we consider the case where the imbalance is non-zero. Arguably, like most bounds, including those obtained in the related literature, ours is sufficient but not necessary for exponential convergence. We point out, however, that most existing conditions for linear networks that are based on requiring an approximately balanced initialization are not satisfied with high probability under random initialization without making the variance of all the entries sufficiently small, which may lead to a poor rate of convergence. We have modified the comments after Theorem 1 to better reflect our response above.\\n\\nWe also believe similar results can be derived for gradient descent with a sufficiently small step size (in this case the imbalance is not invariant, but we should be able to bound its changes), and we are working in that direction.\\n\\n2) **Invariant manifold in the overparametrized setting**:\\n\\nWe respectfully disagree with the reviewer's comment questioning the necessity of studying the invariant manifold in the overparametrized setting. As we stated in the global response, fully understanding the simple overparametrized model could shed light on how to analyze more complex models. \\nWe certainly agree that for standard linear regression the \\\"good\\\" manifold reduces to the span of the data points. However, unlike the linear regression case, there is no clear data-agnostic way to initialize so as to guarantee fast convergence and proximity to the manifold in the overparametrized setting. This is because the zero initialization is in fact a stationary (saddle) point of the gradient flow. As a result, any initialization with small $||U||$ and $||V||$ will be slow to converge, even if it is within the manifold of interest. Our analysis explicitly provides the sufficient condition $V(0)U_2^T(0)=0,U_1(0)U_2^T(0)=0$ to ensure proximity to the manifold during training. 
Moreover, we show that for wide linear networks with random initialization, this condition is approximately satisfied for the entire trajectory.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your comments on our paper. We are glad that you find our result interesting. Below are our responses to your comments:\\n\\n1) **On the width requirement $h>n+m-1$**:\\n\\nThe reviewer raises an interesting point that we had discussed in the appendix, where we show that the width requirement can be relaxed depending on the rank of the data matrix X, which is consistent with previous literature. Specifically, in Section 2 we show that when the data matrix $X$ has full rank and $n<D$, the width requirement is $h>n+m-1$. However, in Appendix B we show that for a rank-deficient data matrix $X$ with $rank(X)=d\\\\leq n<D$, the width requirement becomes $h>d+m-1$. We apologize for not presenting the result in full generality in the main paper. For the works listed by the reviewer, the width requirement scales up as the rank of the data matrix $X$ increases, but we note that there is hardly a direct comparison because of the different settings. We have refined the remarks after Theorem 1 to address your concern.\\n\\n2) **Over-determined linear regression case**\\n\\nCorrect. When the data matrix $X$ satisfies $n>D$, the regression problem is over-determined. Theorem 1 implies exponential convergence to a global minimum of the loss function, whose end-to-end function corresponds to the unique solution of the regression problem. However, this is not the regime we are interested in since, as the reviewer points out, in that case, the analysis is straightforward. \\n\\n3) **Numerical Verification and Conservativeness of our Imbalance Bound**:\\n\\nThank you for your suggestion. We added numerical validations for Theorems 1 and 2 in Appendix A. Indeed, the bound on the convergence rate based on the imbalance is only an upper bound. The convergence rate does depend on several aspects of the initialization, as the existing literature points out. Our imbalance bound is neither better nor worse than the previous bound. 
Rather, it complements existing results by showing a different regime that ensures exponential convergence. Interestingly, this regime is in fact more aligned with standard practices in Deep Learning, as the alternative analysis requires approximate balancedness, which is not satisfied with high probability under random initialization. \\n\\n4) **Extension to multi-linear cases**:\\n\\nWe believe imbalance also plays a role in training multi-layer networks, but rigorously showing so is the subject of future research. In particular, the notion of imbalance for multi-layer networks does exist, but it is unclear under which conditions on the imbalance exponential convergence is guaranteed. Moreover, similar to the single-hidden-layer case, we would like to find conditions that are naturally satisfied by random initialization along with overparametrization. Even if we had answers to all these questions, we do not think we could concisely fit them in with the existing results.\"}",
"{\"title\": \"Interesting observation and theory, more detailed comparisons and some experiments are needed.\", \"review\": \"This paper proves the convergence rate of gradient flow for training two-layer linear networks. In particular, this paper discusses the connection between initialization, optimization, generalization, and overparameterization. The results show that gradient flow can converge to the global minimum at a rate depending on the level of imbalance of the initialization. Moreover, the authors show that random initialization and overparameterization can implicitly constrain the gradient flow trajectory to converge to a point lying in a low-dimensional manifold, thus guaranteeing good generalization ability.\\n\\nThis paper is well organized. It is interesting that sufficient imbalance can guarantee global convergence of two-layer linear networks while other papers may require nearly zero initialization or wide enough networks. Besides, my detailed comments are as follows.\\n\\nOne drawback is that this paper still requires that the width be greater than n+m-1 (Theorem 1), while the network width condition proved in some existing works (listed as follows) does not depend on the number of training examples n (although they require random or orthogonal initialization); the authors may need to comment on their network width conditions after Theorem 1 (currently the authors only say that \\u201cour results is not limited to extremely wide networks with random initialization\\u2019\\u2019).\\n\\n[1] Du, S. S., & Hu, W., Width provably matters in optimization for deep linear neural networks. arXiv preprint arXiv:1901.08572.\\n\\n[2] Hu, W., Xiao, L., & Pennington, J., Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks. In International Conference on Learning Representations.\\n\\n[3] Zou, D., Long, P. M., & Gu, Q., On the Global Convergence of Training Deep Linear ResNets. 
In International Conference on Learning Representations.\\n\\nThe authors prove that the limit of gradient flow can be sufficiently close to the minimum-norm solution if the neural network is sufficiently wide. This conclusion is good and of certain importance for understanding the optimization path of training linear networks. However, if the data matrix X is of full rank and D<n, the training objective is strongly convex. In this case, there is only one minimum, thus the convergence result in Theorem 1 directly implies the parameter convergence results in Theorem 2. \\n\\nSome experiments may be needed to verify the theory. In particular, Theorem 1 only provides an upper bound result, and thus cannot fully characterize the effect of the imbalance on the convergence. The authors may try initializations with different imbalances and plot the convergence curves to demonstrate the results in Theorem 1. Additionally, the results in Theorem 2 may also need to be verified in experiments.\\n\\nSo far I can only see that the imbalance plays an important role in training two-layer linear networks; can you extend this to multi-layer cases? Will the imbalance at the initialization together with sufficient overparameterization still guarantee the convergence of gradient flow?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"concerns about significance\", \"review\": \"This paper studies the optimization and generalization properties of a two-layer linear network. The considered setting is over-parameterized linear regression where the input dimension is D, the number of samples is n<D, and the target dimension is m. The hidden width is h. The paper has two main results. The first result is exponential convergence of gradient flow to a global minimum, where the convergence rate depends on the (m+n-1)-th singular value of an \\\"imbalance\\\" matrix. The second result shows that the solution found is close to the minimum L2 norm solution if a certain orthogonality assumption is approximately satisfied initially; it was then shown that if the width h is sufficiently large, then under a random initialization scheme, the solution found is close to the minimum L2 norm solution within distance $1/\\\\sqrt{h}$.\", \"pros\": \"The results are not previously known to my knowledge. The proofs appear to be correct as far as I can tell.\", \"cons\": \"My overall concern is the significance of the results. The results, while correct, do not contribute much to our understanding of optimization and generalization in deep learning. The ways in which the authors interpret the results are unsatisfactory or even misleading.\\n\\n1) Thm 1 shows a convergence rate of $e^{-ct}$, where $c$ is the (m+n-1)-th singular value of an imbalance matrix. On the surface, this result seems to suggest that a larger $c$ is beneficial for convergence. However I believe this suggestion is incorrect and can be very misleading. Indeed, previous work (e.g. Arora et al. 2018a) has shown linear convergence under zero imbalance ($c=0$), as cited in the paper, but Thm 1 fails to capture that. 
I think in general this $e^{-ct}$ is a very loose bound that does not capture the real convergence rate (unless the authors can provide convincing evidence that suggests otherwise).\\n\\nThat said, I do think Thm 1 is an interesting theoretical result and the proof is clever. I'm concerned about the practical relevance and the possibly misleading message it sends.\\n\\nAnother weakness is that Thm 1 only considers gradient flow but not gradient descent. \\n\\n2) Thm 2 and its interpretations are unsatisfactory in a number of ways.\\n\\nFirst, we know that just doing a normal linear regression using gradient descent (starting from 0) leads to the minimum L2 norm solution. So now we go through all the trouble in the 2-layer net and finally show we can find a solution that's almost as good as linear regression -- what's the point of doing that?\\nOf course, one may argue that we are studying a toy model in order to better understand deep learning. However, the main message from this result can also be conveyed in linear regression -- as shown in Sec 4.1, the main step is to find an invariant manifold for gradient flow such that the minimizer in that manifold must be the min-norm solution; for linear regression, such a manifold also exists, which is just the span of the data points. \\n\\nSecond, the initialization used ($1/h$ variance in both layers) is unconventional. It's different from the standard 1/fan_in initialization or the NTK parameterization. What happens if we use those more standard initializations? And what happens if we make the initialization smaller, e.g. $1/h^2$, or $1/h^{100}$? Would those change the result? The scale of the initialization is very important in this line of work (such as NTK), so this should be addressed clearly.\\n(The authors actually claim that as $h\\\\to\\\\infty$ we would get the NTK solution, following Jacot et al. (on page 7). 
I actually don't think Jacot et al.'s work directly implies this, because this paper uses a different initialization scale.)\\n\\nThird, the authors try to differentiate this result from all the NTK results, but the theorem is exactly showing that the final solution is close to the NTK solution. Isn't this a bit ironic?\\n\\nFourth, the authors claim \\\"this is the first non-asymptotic bound regarding the generalization of linear networks in the global sense.\\\" Maybe check out these papers:\\nImplicit Bias of Gradient Descent on Linear Convolutional Networks,\\nImplicit Regularization in Matrix Factorization.\\nAlso, many NTK papers also have non-asymptotic bounds. For 2-layer linear networks, one should be able to easily get a bound on the distance of the learned model and the min-norm solution -- might be better than Thm 2.\\n\\n\\n-------- after rebuttal --------\\n\\nThanks to the authors for the response and the updated manuscript. My assessment stays the same, and below are my additional comments.\\n\\n1. About Appendix E\\n\\nThanks for the clarification about the initialization scaling. However, this raises more concern about the significance of the result. In Appendix E, it is shown that the scaling considered in the paper and the NTK scaling lead to the **same** gradient flow dynamics. This suggests that we are actually still in the kernel regime, contrary to the main motivation and the claims in the paper. (As for the time rescaling issue, it doesn't matter in gradient flow since the difference can be absorbed by rescaling the learning rates.)\\n\\nAppendix E also mentions that several previous papers used a small multiplier $\\\\kappa$ to make the initial network small. The authors claim that this makes the convergence rate slower. I don't think this affects the convergence rate, but it only affects the width requirement (see e.g. [1]). 
In fact, instead of using this multiplier, there is another way to make the output zero without changing the NTK and without requiring a larger width, that is, to use an anti-symmetric initialization -- see e.g. [2][3][4][5].\\n\\n[1] Arora et al. Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks\\n\\n[2] Chizat et al. On lazy training in differentiable programming\\n\\n[3] Hu et al. Simple and effective regularization methods for training on noisily labeled data with generalization guarantee\\n\\n[4] Bai and Lee. Beyond linearization: On quadratic and higher-order approximation of wide neural networks\\n\\n[5] Zhang et al. A type of generalization error induced by initialization in deep neural networks\\n\\n2. About the motivating questions\\n\\nThis paper proposes to answer two questions in the introduction. The first question is \\\"Is the kernel regime, which requires impractical bounds on the network width, necessary to achieve good generalization?\\\" First, I don't think this paper answers this question since the considered regime is still basically the same as the kernel regime. Second, even if it does, this question itself is not valid, since there are numerous previous theoretical works that study generalization outside the kernel regime, in more interesting settings, e.g. [6][7][8][9] and many more (none of which are mentioned in the paper).\\n\\n[6] Allen-Zhu and Li. Backward Feature Correction: How Deep Learning Performs Deep Learning\\n\\n[7] Allen-Zhu and Li. What Can ResNet Learn Efficiently, Going Beyond Kernels?\\n\\n[8] Wei et al. Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel\\n\\n[9] Woodworth et al. Kernel and Rich Regimes in Overparametrized Models.\\n\\nThe second main question in the introduction is \\\"Does generalization depends explicitly on acceleration? 
Or is acceleration required only due to the choosing an initialization outside the good generalization manifold?\\\" I genuinely cannot understand this question.\\n\\n3. In the updated manuscript the authors state \\\"To the best of our knowledge, this is the first non-asymptotic bound regarding the generalization property of wide linear networks under random initialization in the global sense.\\\" This is still false (and insignificant) since the stated result is a direct consequence of previous NTK work.\\n\\n4. I certainly understand that understanding deep learning is very challenging so it's a natural step to start with simple models. However I think this paper in its current form has limited significance and has major issues in how it discusses previous work, main motivations and contributions, etc., for reasons described in the review.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Comments to \\\"On the Explicit Role of Initialization on the Convergence and Generalization Properties of Overparametrized Linear Networks\\\"\", \"review\": \"#### General Comments\\nA proper initialization plays an important role in the success of over-parameterized models such as deep neural networks and high-dimensional models. However, the explicit role of initialization in the theoretical results of an algorithm has not been stated well to my knowledge. The main task of this paper is to present a novel analysis of overparametrized single-hidden-layer linear networks, which formally connects initialization, optimization, and overparametrization with generalization performance. Specifically, it is shown that the squared loss converges exponentially at a rate that depends on the level of imbalance of the initialization.\\nWith respect to linear networks, the paper makes the following three main contributions:\\n(1) The role of initialization of the gradient flow on the convergence is characterized explicitly. \\n(2) The stationary point of the gradient flow is sufficiently close to the min-norm solution in the linear case.\\n(3) Random initialization for large wide linear networks ensures that the dynamics of the network parameters are constrained to a low-dimensional manifold. \\nOverall, this is a well-written paper with significant novelty. The results seem interesting in the deep learning theory literature. \\n#### Specific Comments\\n(1) For Theorem 2, the network width is required to be a polynomial of the input dimension D, which may be loose in some practical network structures. I wonder whether such a constraint can be relaxed further? It would be better if some quantitative comparison with related works were made. \\n(2) When noisy gradient descent is considered, is the current analysis still applicable and can similar results be derived? 
\\n(3) If an activation function is added such that the hypothesis class is nonlinear, is the adopted analysis still valid? If not, what are the additional challenges?\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good result\", \"review\": \"This paper analyzes the convergence of gradient descent optimizing overparametrized linear neural networks, and proves an exponential convergence rate. Moreover, the paper bounds the distance of the optimizer to the smallest-norm solution, which is justified in other papers, such as Montanari's, as the generalizable solution. Thus the solution that SGD outputs has good generalization as well.\\nI believe overall the result is good, and the points are stated clearly. I have the following suggestions:\\n\\n1. If space is allowed in the appendix, the algebra proving that (9) is time invariant can be provided, rather than \\\"one can easily check\\\". This intermediate step is critical for the full proof so I'd like to check it. (with this I can raise score to 6)\\n2. A sketch of the Montanari paper about the property of $\\\\hat \\\\Theta$ can be discussed in the appendix. \\n3. Regarding the thm, it would be definitely sufficient for the conference if anything can be suggested with RELU activation. In NTK work that's just another kernel so it's easy to extend, but it might be hard here, I'm not sure.\\n4. More literature review. I think Rong Ge has some papers about the landscape of the matrix factorization problem so it would be great to compare with them in detail, even if in the appendix.\\n5. In the appendix, it would also be great to prove, under a certain initialization, the expectation/high-probability value of the imbalance singular value.\\n6. How does the amount of data affect the generalization bound? I think it's 1/\\\\sqrt{n} in NTK work; is there a similar behavior here?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
HNA0kUAFdbv | CANVASEMB: Learning Layout Representation with Large-scale Pre-training for Graphic Design | [
"Yuxi Xie",
"Danqing Huang",
"Jinpeng Wang",
"Chin-Yew Lin"
] | Layout representation, which models visual elements in a canvas and their inter-relations, plays a crucial role in graphic design intelligence.
With a large variety of layout designs and the unique characteristic of layouts that visual elements are defined as a list of categorical (e.g. shape type) and numerical (e.g. position and size) properties, it is challenging to learn a general and compact representation with limited data. Inspired by the recent success of self-supervised pre-training techniques in various natural language processing tasks, in this paper, we propose CanvasEmb (Canvas Embedding), which pre-trains deep representation from unlabeled graphic designs by jointly conditioning on all the context elements in the same canvas, with a multi-dimensional feature encoder and a multi-task learning objective. The pre-trained CanvasEmb model can be fine-tuned with just one additional output layer and with a small size of training data to create models for a wide range of downstream tasks. We verify our approach with presentation slides data. We construct a large-scale dataset with more than one million slides, and propose two novel layout understanding tasks with human labeling sets, namely element role labeling and image captioning. Evaluation results on these two tasks show that our model with fine-tuning achieves state-of-the-art performances. Furthermore, we conduct a deep analysis aiming to understand the modeling mechanism of CanvasEmb, and demonstrate its great potential use on more applications such as layout auto completion and layout retrieval. | [
"Layout Representation",
"Pre-training"
] | Reject | https://openreview.net/pdf?id=HNA0kUAFdbv | https://openreview.net/forum?id=HNA0kUAFdbv | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Sw68EGL1C0",
"3UGttivn053",
"h1-M3_3iG39",
"jEL3ArrOLx7",
"ooxIeD-GI5o",
"4Ppf_MqM7OK",
"8pHhf6e214"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513250,
1606100054951,
1606099399865,
1606098452306,
1604009804948,
1603899442844,
1603843802888
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3557/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3557/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3557/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3557/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3557/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3557/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper proposes to learn layout representations for graphic design using transformers with a masking approach inspired by BERT. The proposed model is pretrained on a large collection of ~1M slides (the script for crawling the slides will be open-sourced) and evaluated on several downstream tasks.\", \"review_summary\": \"The submission received slightly negative reviews, with scores of 4 (R3) and 5 (R2, R4). \\nReviewers found the paper to be well-written and clear, and the problem of layout embedding to be interesting. Reviewers agree that the use of transformers for layout embedding has not been explored in prior work. However, the paper did not have proper citations and comparisons against prior work for layout embedding, and lacked systematic evaluation. Reviewers also would like to know more details about the dataset that was used for pre-training.\", \"pros\": [\"Novel use of transformers for layout embedding (not yet explored in prior work)\", \"Use of a large dataset of slides\"], \"cons\": [\"Lacked proper citations and comparisons against prior work for layout embedding\", \"Lacked systematic evaluation\", \"Missing details about the dataset\"], \"reviewer_discussion\": \"During the author response period, the authors responded to the reviews indicating that they would improve the draft based on the feedback, but did not submit a revised draft. As there was no revision to the submission, there was limited discussion, with the reviewers keeping their original scores. All reviewers agree that the direction is interesting but the current submission should not be accepted.\", \"recommendation\": \"The AC agrees with the reviewers that the current version is not ready for acceptance at ICLR, and it would be exciting to see the improved version. 
We hope the authors will continue to improve their work based on the reviewer feedback and that they will submit an improved version to an appropriate venue.\"}",
"{\"title\": \"Thanks for your comments.\", \"comment\": \"As most reviewers mentioned, the key issues of this paper are the comparison with previous works and more formal evaluation methods. We will improve these in the next version.\\n\\n-Evaluation of higher complexity tasks: The current two tasks are motivated by real downstream applications for layout recommendation (one for element classification and one for element-relation classification). We may think of more difficult tasks related to layout understanding in the future.\\n\\n\\n-Compare with other lower complexity methods / existing works: Yes, we will add extra baselines from existing works for comparison. Some of them contain simple neural architectures, which will further demonstrate whether such a Transformer with pre-training works better. \\n\\n\\n-Evaluation of layout auto-completion and layout retrieval: The two applications are used as extra evaluations for the pre-trained model. We simplify the setting of the layout auto-completion task to align with our pre-training tasks. Following your suggestion, we will try to follow the same setting as previous works (predict only the geometry properties and not be constrained to only 1 element for prediction). For the layout retrieval evaluation, since there is no existing dataset for evaluation, we just conduct a simple human evaluation. We will design a more systematic evaluation in the next version.\"}",
"{\"title\": \"Thanks for your comments.\", \"comment\": \"-Originality.\\nWe did not present the Transformer details under the assumption that the Transformer is a general and popular model familiar to most readers. Thanks for the comments; we will add more description of our motivation to use such a model architecture, and also add more details of the model.\\n\\n\\n-Reference.\\nThanks for the correction. For the first paper you mentioned, it should be \\u201cZheng et al., 2019\\u201d; the mistake was made due to our wrong formatting of the latex, and we will fix that.\\n\\n\\n-Quality.\\n\\n1.2.3. Difference to related works. We will add more descriptions to differentiate our approach from previous works and refine the \\u201cRelated Work\\u201d Section. As you mentioned, one major difference is that we create a large-scale layout dataset and leverage the power of pre-training for general layout representation. The average number of layout elements is 6, and the maximum number can be up to 20.\\n\\n4. Though the semantic box geometries are considered, we argue that font size and font type are also important (for example, a large font size might indicate the element is a title instead of a body text).\\n\\n5. The \\u201cImage Captioning\\u201d task is to group elements in a canvas that belong to an image-caption pair, and it requires layout understanding. It has potential uses such as layout recommendation, which groups image-text pairs as one element for template mapping.\\n\\n6. Since the works you mentioned are related to layout generation, we did not consider them as baselines previously. Yes, as many reviewers have pointed out, we will try to adapt these works as baselines (e.g., use their intermediate outputs).\\n\\n7. 
As for Graph Neural Networks, they would be another promising direction for future work, where we would need to define relations in the layout as edges.\\n\\n\\n-Clarity.\\nThanks for the suggestions and corrections; we will rephrase some confusing descriptions and refine our writing.\"}",
"{\"title\": \"Thanks for your comments\", \"comment\": \"-Dataset. We will add more dataset details in the next version. To answer your questions, there are on average 6 elements in each slide. Slides are parsed using our internal tool (similar to the publicly available python-pptx library). For the role detection task, the dataset is NOT a subset of the pretraining dataset. The role labels are separately defined and annotated.\\n\\n-Evaluation. (1) In addition to type, position and color are also useful properties for some potential downstream tasks; therefore we evaluate them in the layout auto-completion task. We will refine the evaluation to a more systematic approach. (2) Thanks, we will show more visualization case studies in the supplementary. Also, we will define a more systematic human evaluation. (3) The two papers you mentioned are related to layout generation, but they cannot be directly applied to our tasks related to layout understanding; that\\u2019s why we didn\\u2019t consider them as baselines. But we will definitely consider using their intermediate outputs as extra baselines in the next version.\"}",
"{\"title\": \"A decent work on slide layout representation learning but evaluation can be improved\", \"review\": \"This paper applies state-of-the-art transformer-based neural networks to layout representation learning of slides. The most notable contribution of this paper is the construction of a large-scale parsed slide layout dataset. This paper proposes to pre-train the network on this large-scale dataset with a masked reconstruction strategy and verifies it with several subtasks including element role labeling, image captioning, auto-completion and layout retrieval, with a comparison to a decision-tree based method as baseline.\\n\\n+Most previous layout learning works only show experimental results on small labeled datasets (a few thousand), partially due to the scarce nature of layout data. This paper looks at slide layout data and constructs a large-scale (>1m) dataset with parsed element properties. \\n+The chosen network design and training strategies all make sense. \\n\\n-It is a pity that this paper didn\\u2019t disclose sufficient details of how the large-scale dataset was constructed and of data statistics, e.g. how many elements in each slide, templates, completeness of properties, etc. How are the properties parsed, fully automatically? Is the role labeling dataset part of the pretraining dataset?\\n-Pretraining. The proposed evaluation tasks all seem to be sub-tasks of pre-training, and it doesn\\u2019t seem to fall into the classic scheme of unsupervised pretraining + supervised fine-tuning. Dataset differentiation is another issue. For example, in the role labeling experiment, is this targeted dataset a subset of the large-scale one? And is the only difference the training loss? \\n-Evaluation. Evaluating graphic layout can be a hard problem and this paper tried to propose several small tasks as probes into the learned network. However, it will be more convincing to have a systematic design of experiments. 
First of all, in addition to type properties, how about geometric property and color property prediction? Second, any experiments would benefit from both quantitative and qualitative results. Especially for layout design, visualization is very important. \\n-Layout retrieval is an interesting experiment, but manual scoring seems to be arbitrary. \\n-Baselines. Neural design network by Lee et al. in ECCV2020 and LayoutGAN by Li et al. in ICLR 2019 seem to be good baseline network architectures to compare, although they are trained in different ways.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Different Approach on the non-trivial task of Layout representation, but unsatisfactory evaluation\", \"review\": \"Summary of the work:\\nThis work presents a Transformer-based framework to learn Graphical Layout representation/embedding, taking inspiration from transformer-based models in the area of NLP. To train their model, named CANVASEMB, the paper contributes a dataset of Powerpoint Slides, with more than 1 million slides (the paper promises to make the dataset public). When performing new tasks on graphical layouts, CANVASEMB can be used as a pre-trained model. The paper also demonstrates that CANVASEMB achieves SOTA performance on two downstream tasks, viz., element role labeling, and the so-called \\\"image captioning\\\" in the context of a layout image and its text tag.\", \"originality\": \"The task being attempted (Learning Layout embeddings) is not new. However, the paper makes use of a Transformer-based model for Layout embedding, which is different and was not explored/employed before. However, the idea of employing Transformers is not well motivated; the method section is technically sparse, in the sense that not much of the Transformer model is presented (at least using a Figure, if not via Mathematical formulation). It would help the reader if details on the Transformer model and the Attention scheme were presented to some extent.\", \"significance\": \"Layout Representation and Learning is an import problem and is gaining significance. That said, any method should evaluate thoroughly to demonstrate its merits and advantages over prior works, for potential impact. This paper lacks a thorough evaluation. More on this later.\", \"references\": \"\", \"penultimate_line_in_the_first_paragraph_of_rw_section\": \"It should be \\\"Cao et. al 2019\\\", not \\\"Xinru Zheng and Lau (2019)\\\". The first author naming convention cannot be superseded at the writer's discretion. Only include the second and last author in the reference? 
Has been cited again after this, in the same way. I will assume that this was an oversight. Needs to be corrected.\\nAlso, the paper is missing the following reference, which, similar to Li 2020b, uses a Tree-based representation of Document Layouts.\\n1) READ: Recursive Autoencoders for Document Layout Generation, CVPR 2020\", \"quality\": \"1) The paper claims that CanvasEmb is the first work to provide pre-trained models for layouts in graphic design. I think the \\\"Content-Aware Generative Modeling of Graphic Design Layouts\\\" from SIGGRAPH 2019 is the first work to do so, although on a much smaller scale (~4K magazine designs as against 1M used in this paper).\\n2) The paper does not contrast how it is different from the above work (Cao et al., SIGGRAPH 2019). What are the key differences? What is special about CANVASEMB, that can not be done by the 2019 work when given such a large dataset? Is there a key technical bottleneck that is overcome in this work? I would first like to know the maximum number of layout elements in a Slide in the entire dataset.\\n3) The \\\"Related Work\\\" section could be better. Currently, there seems to be a lack of some sort of explanation on what differentiates this work from the prior works. In other words, there is a frequent and incoherent jump from reference-to-reference in the context of this paper. The desire for a reader is to understand the difference in contributions of this work as compared to the prior works in the related work section.\\n4) \\\"Content-Related Properties\\\" are already taken into account when the semantic box geometries are considered. So, the text font-size and font-type will not have an additional effect on the layout, but only on the perceptual quality of a content-filled layout. 
So, I don't think this counts as one of the layout properties when you have the box geometries already accounted for.\\n5) The so-called \\\"image captioning\\\" task is nothing but a binary classification of an image-text tag pair. \\n6) Comparisons against the SIGGRAPH 2019 paper (Cao et al, 2019), Li 2020b, and READ (CVPR 2020) should be presented. Doing such an evaluation on the same datasets as CANVASEMB will demonstrate the strength and weaknesses of the Transformer-based architecture.\\n7) In addition to these evaluations, I would like to see why the SOTA Graph Neural Networks can not be employed on this task when they can capture rich structural properties of a layout and require less computation compared to Transformers. This is a very important and interesting experiment, which should be performed. Moreover, training a Graph Neural Network does not require humungous data. This weakens the motivation of the approach presented in this paper.\", \"clarity\": \"1) The writing is good. As I said earlier, the RW section could (and should) be improved.\\n2) I would suggest presenting an image that contains the components of the Transformer. The reader should not have to refer to the cited papers to know/learn what the Transformer is made up of.\\n3) I think the word \\\"pre-trains\\\" in the paper means to say that \\\"CanvasEmb\\\" can be used as a pre-trained model for other tasks. CanvasEmb, in the due course of its development, is not \\\"pre-trained\\\" to begin with. So, I think the sentence in the abstract, the introduction, and parts of the evaluation involving the word \\\"pre-trains\\\" should be rephrased to remove confusion and for an easier understanding of the paper. 
Ex: A Transformer is first trained using the Slides crawled from the internet, which we term as CanvasEmb, and can be used as a pre-trained model for other downstream tasks.\\n4) \\\"mechanism\\\" typo in the last paragraph of introduction (it appears twice continuously)\\n5) In Equation 5, the tasks for each loss formulation should be written (in the equation itself)\\n6) I would like to know the maximum number of layout elements in a Slide in the entire dataset. For ex: if there are 3 Text Boxes and 2 Figures in a Slide, then the total number of layout elements is 5. \\nThere has been no mention of this in the entire paper, but this is crucial to understand if there is really a need for Transformer-based architecture.\", \"pros\": \"1) Different, interesting approach to Layout Embedding\\n2) Contributes a huge dataset of Powerpoint Slides (1M) \\n3) Simple and easy writing\", \"cons\": \"1) Some important dataset-related stats are missing (as pointed above). They are essential for understanding the richness of the dataset and to make an informed decision on what kind of deep-learning models should be employed when using such a dataset for different tasks.\\n2) Weak evaluation\\n3) Comparison to Cao et al., SIGGRAPH 2019, Li et al 2020b, and READ (CVPRW 2020) are missing. The last two are tree-based generative models of Document layouts, and should be tested for performance on the same dataset used to train CANVASEMB.\\n4) Comparison to SOTA Graph Neural Networks: Missing explanation for a motivation to use Transformer-based architecture, when GNNs can capture rich structural properties of a layout and require less computation compared to Transformers. This is a very important and interesting experiment, which should be performed. Moreover, training a Graph Neural Network does not require humungous data. 
This weakens the motivation of the approach presented in this paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"BERT applied to graphic design layouts (nice!). Requires more experimenting and more details need to be presented\", \"review\": \"This paper proposes to learn representations for graphic design layouts unsupervised using a Masked Language Model objective as in BERT with additional domain specific input embeddings. It also presents a large dataset of slides along with manual annotations used for their evaluation.\\n\\nThe idea of using BERT like training to pre-train features for graphic design layouts is sound and the proposed approach modifies the input token representation to represent different layout elements -- using positional embeddings for continuous attributes (position, rotation etc.) and learned embeddings for categorical attributes (element type etc.). Since there is no positional embedding for position in the sequence, the ordering in the sequence does not matter for training, which is a huge benefit over training sequential models in this context, where ordering can be arbitrary. \\n\\nI believe the evaluation of the method presented in the paper is lacking and could be improved significantly. Moreover, it would be useful to make it clear if the authors plan to release the collected slides dataset since that would count as a positive contribution in my books. More training details (how many/what GPUs, how much time/epochs, any learning rate tricks, optimizer etc.) would also help with the reproducibility aspect of the paper. \\n\\nW.r.t evaluation, the authors propose to do the tasks of element type classification (called role labeling in the paper) and image-caption pair detection (i.e. a binary classifier). On both of these tasks, the only baseline presented is a gradient boosted decision tree, which performs quite well on both tasks already. This begs the question, are all these parameters used to learn CanvasEmb needed? In my head there are the following ways to answer this question:\\n1. 
Are there other higher complexity tasks (where the decision tree baseline would fail a lot) that are important for graphic design layout intelligence? \\n2. Are there other lower complexity (parameter wise) methods that also take 2D layout into consideration that you could train for these two tasks? For example, how would a simple convnet trained to classify UI elements perform? You could imagine a variety of ways of showing the convnet which element to classify, one example would be passing its bounding box mask as a separate input channel. How does CanvasEmb perform as compared to them?\\n\\nThe other evaluation presented is on layout retrieval and layout auto completion. Layout auto completion in this case is a misnomer (when related to previous work), since the authors only mask 1 attribute of 1 element at a time and compute the accuracy of predicting that attribute on test set examples. This differs strongly from layout auto completion in other work where complete elements along with their properties are added. Similarly for layout retrieval, a quick ad-hoc rating study is presented. \\n\\nIt is hard to position this work w.r.t. previous work given how the evaluation is performed. It would be great to evaluate this method in the context of existing work to really understand the importance of doing this sort of pre-training. Doing higher complexity tasks as mentioned above would also be very useful. For example, can you use CanvasEmb to offer layout suggestions (as in DesignScape, CHI 2015 as an example)? You can imagine a Gibbs Sampling like procedure where you repeatedly randomly erase a part of the parameters of the layout elements (i.e. their position, rotation, color etc.) and repredict them for some steps. Would this be able to generate plausible layout suggestions? 
\\n\\nOverall, I believe this is an exciting direction and I'm looking forward to hear how the authors think their paper could be evaluated on harder tasks, against stronger baselines and against existing work.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
nLYMajjctMh | Federated Learning of a Mixture of Global and Local Models | [
"Filip Hanzely",
"Peter Richtarik"
] | We propose a new optimization formulation for training federated learning models. The standard formulation has the form of an empirical risk minimization problem constructed to find a single global model trained from the private data stored across all participating devices. In contrast, our formulation seeks an explicit trade-off between this traditional global model and the local models, which can be learned by each device from its own private data without any communication. Further, we develop several efficient variants of SGD (with and without partial participation and with and without variance reduction) for solving the new formulation and prove communication complexity guarantees. Notably, our methods are similar but not identical to federated averaging / local SGD, thus shedding some light on the essence of the elusive method. In particular, our methods do not perform full averaging steps and instead merely take steps towards averaging. We argue for the benefits of this new paradigm for federated learning.
| [
"optimization",
"federated learning",
"personalization",
"local SGD"
] | Reject | https://openreview.net/pdf?id=nLYMajjctMh | https://openreview.net/forum?id=nLYMajjctMh | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Wp7vGgux92e",
"LLbSRB6-AIo",
"n56zYRqKsT2",
"hVJ5fKkCWXe",
"faPNn6EWFJ6",
"yKGgV2fqiaH",
"MaQc5wy8Mg7",
"1nt_anxUt1X",
"FZHqZK3gKSu",
"4zne66bu82V",
"3evcG0lmFR",
"52bnQCEZ-ZN",
"miLKDykR_ik",
"ZAXPwDBqTy",
"NnacZ4JB1g3",
"1zGD6ZhbZJ8",
"LXAasl0rmvq",
"zGzn9-i-wtp",
"KPYcFDG0ti",
"R4cRgYjXq3",
"sDg6GZHRktx"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040378997,
1606300047295,
1606276184992,
1606267547793,
1606258227302,
1606255261535,
1606254643669,
1606254371798,
1606253775821,
1606252109879,
1606251170694,
1606247340392,
1606245873373,
1606244300454,
1606241194923,
1606211797629,
1606083550761,
1603944862716,
1603857393703,
1603778856878,
1603726439244
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3554/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3554/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3554/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3554/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3554/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3554/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper studies an elegant formulation of personalized federated learning, which balances between a global model and locally trained models. It then analyzes algorithm variants inspired by local update SGD in this setting. The problem formulation using the explicit trade-off between model differences and global objective was received positively, as mentioned by R1 and R2. After a productive discussion including the authors and reviewers, unfortunately consensus remained that the paper remains below the bar in the current form. The contributions are not presented clearly enough in context, the set of competing algorithms (including e.g. EA-SGD, ADMM, SVRG/Scaffold for the heterogeneous setting, and others) needs to be clarified in particular for the modified formulation compared to traditional FL, since objectives are different. Some smaller concerns also remained on the applicability to more general non-convex settings in practice. We hope the feedback helps to strengthen the paper for a future occasion.\"}",
"{\"title\": \"Reply to: $\\\\lambda$ and third concern\", \"comment\": \"About $\\\\lambda$:\\n\\nYes, our theory does not provide guidance on how to choose $\\\\lambda$. We focus in this paper on many other aspects of our setup, but not on this one. The reason for this is not a simple oversight. In order to give guidelines on what $\\\\lambda$ to recommend, we would need to i) tackle the problem of generalization (as that is the ultimate arbiter here), ii) do extensive systems-level testing. These are nontrivial matters that require a separate paper. Our paper is not about generalization and is hence we are not able (i.e., we have no theory fo this) to recommend some $\\\\lambda$ other than another from this applied perspective. At present our recommendation is this: i) practitioner (somehow) decides on level of personalization and chooses $\\\\lambda$, ii) we then recommend training is done b solving our personalized FL formulation (1).\", \"third_concern\": \"Yes, what you say is absolutely right; we agree with this - this kind of a comparison would make perfect sense, and is something we were actually thinking of when writing the paper. However, we made a conscious choice not to further broaden the scope of this paper as we already had an abundance of results which stand on their own. So, we decided to keep this paper about optimization and not generalization and decided to keep testing against other personalization benchmarks for a future project. In fact, we have been working for some time now on exactly this problem. We wish to compare many FL methods from a personalization perspective, including ours. This is something that in our mind requires a dedicated paper. \\n\\nWe can think of many things that could be added to our work. The beauty of research is that there are always frontiers ahead, always something new to discover. But one ultimately needs to decide where one paper ends and another starts. 
We made the line here: we purposefully decided to not go nonocnvex in our paper, and we also decided not to go into benchmarking and generalization comparison in this work. We believe that the results we have obtained stand on their own. And we believe that our work will inspire others as well to explore these ideas further.\"}",
"{\"title\": \"About Figure 1\", \"comment\": \"The reason why the lines in Fig 1 were non-monotonic was the numerical errors. Specifically, we did not run the algorithm that estimates $x(\\\\lambda)$ for enough iterations when $\\\\lambda$ is large. After re-running the code with enough precision, we can see that both blue and orange curves are monotonic; there is no kink for large values of $\\\\lambda$. Thanks a lot for spotting this! Figure 1 is now fixed in the revised version of the manuscript.\"}",
"{\"title\": \"Response to atuhors\", \"comment\": \"Thank you for your response.\\n\\nRegarding $\\\\lambda$: \\nYes, I understand your point that $\\\\lambda=0$ is not good in practice since each user does not have enough data to train its own model. At the same time, if we just look at the theory (for instance, Theorem 3.1), $\\\\lambda=0$ seems to be the best choice as it gives the ultimate personalization. Hence, there appears to be a trade-off between choosing small $\\\\lambda$ and aiming for more personalized models and larger $\\\\lambda$ and somehow taking more advantage of all users' data over the distributed network. Hence, $\\\\lambda$ is the parameter that governs an underlying and important trade-off here. However, your result does not offer guidance on choosing $\\\\lambda$, which seems to be a critical missing piece in your proposed framework. \\n\\nRegarding the third concern (comparison with other works): \\nI understand that your formulation is different, but I still think you can compare your method with other works. Your paper and those related works all try to come up with different ideas and methods for adding personalization to federated learning. Each of these works ends up proposing a model to be implemented on each device. For instance, your paper suggests implementing $x_i$ for device $i$. Other paper suggests a different personalized model. But still, these methods can be compared, in terms of test accuracy (with an example with heterogeneous data distributions), or in terms of communication efficiency, etc. In other words, while I understand the formulations are different, the final outcomes can be compared to see which one obtains better personalization, which one is more efficient in terms of communication, etc.\"}",
"{\"title\": \"Some more clarification, hopefully useful.\", \"comment\": \"**I recognize this contribution. I can also see, from the perspective of other reviewers, how this may seem to make the contribution appear incremental. I think it could also have been communicated more clearly (e.g., moving the contents of footnote 1 to the main body of the paper).**\\n\\nWe believe most reviewers failed to see this contribution; and that's why they made the remarks they made. Some if it is clear from the reviews. We do not believe though providing such link is incremental. It's a big shift, in our opinion, in how we believe one should view the role of local steps in local methods. The introduction of virtually all FL papers mentions that local steps are designed due to communication complexity in mind. Virtually all authors also stress the importance of non-iid assumption. But in the space of non-iid data, there are no results that suggest local methods beat their non-local counterparts in communication complexity. Our paper is the first to do so, and we view this as a substantial and not incremental contribution. And we show how degree of personalization plays an important role here.\\n\\n**I fully agree that your paper already covers a lot of ground, and that including more results just for the sake of covering the non-convex setting would not be a good idea. I also comment you for not stopping at restricted non-convex classes like quasi-strong or PL. It's exciting to hear that you have additional results in this direction. The intention of my comment was more to point out that results on the non-convex setting may have drawn greater enthusiasm from this crowd.**\\n\\nYes, we are in agreement here. 
We can only hope the convex setting, and the potential for a transformation in thinking about what local steps are doing in FL optimizers that our work provides, will be appreciated as well.\\n\\n**It may be more of a limitation of the double-blind system adopted by ICLR, but unfortunately it doesn't matter whether your work was done sooner or was not influenced by those references (arxiv:2006.04735 and arxiv:2002.07839). They appeared more than a month before the ICLR submission deadline and they are very relevant to the topic of this paper, so they ought to be cited and discussed.**\\n\\nWe think this is a matter of philosophy behind how science is done. We are of the belief that earlier contributions should not be required to cite later contributions, unless, of course, the earlier paper evolves and changes in time. We can add these citations, as clearly they are relevant. But we would find it prudent to mention that we have read the papers only after a first complete draft of our paper was ready. This would be a fair way of handling this. Happy to do so. Readers should not be confused by inverted timelines, and timelines matter.\\n\\n **Please elaborate on what you mean by \\\"additional computational insights\\\" here. The reply does not really address my question. You seem to agree intuitively with the point I mentioned. What happens for even larger values of $\\\\lambda$?**\\n\\nHappy to elaborate. We do not have any theoretical statements in the paper that would dictate how the functions $A:\\\\lambda\\\\mapsto ||x(\\\\lambda)-x(0)||^2$ and $B:\\\\lambda\\\\mapsto ||x(\\\\lambda)-x(\\\\infty)||^2$ should look like. \\nVector $x(\\\\infty)$ is not necessarily solving the problem $$\\\\max_{\\\\lambda \\\\geq 0} A(\\\\lambda).$$ So, some amount of \\\"oscillation\\\" is expected for large values of $\\\\lambda$. Perhaps this explains the kink. But we do not really know. We'll do an experiment with larger values of $\\\\lambda$ and report on our findings.
We will probably not manage to do this by the response deadline, but we will certainly update this in our paper, whether the paper is accepted or not. It's a good suggestion.\\n\\nSomewhat intuitively, one should expect $A$ to be \\\"increasing\\\" (in the sense that it starts small and eventually becomes big) and $B$ to be \\\"decreasing\\\" (in the sense that it starts big and eventually becomes small). But this does not necessarily have to mean (and we do not believe it does) monotonicity in a mathematical sense.\", \"we_do_have_statements_about_monotonic_behavior_of_other_quantities\": \"$f(x(\\\\lambda))$ and $\\\\psi(x(\\\\lambda))$. These influence the quantities $A$ and $B$ indirectly only.\\n\\n**This is sad. One can always strive for a balance. Theory papers that ignore important practical considerations don't necessarily contain meaningful theory either.**\\n\\nLet us be more precise as we believe there is a bit of misunderstanding here. We wanted to say the following: papers that strive for maximum applicability will contain minimal, trivial or no theory, as they need to deal with many hard practical and systems constraints that are simply beyond the reach of analytical tools. On the other hand, papers containing beautiful theory necessarily ignore many practical considerations, as only in this way can they capture some isolated phenomenon analytically. Both of these extremes can be fine research. We personally have a preference for a balance.\"}",
"{\"title\": \"Thanks\", \"comment\": \"We'll do what we can with the time we have, and we promise to do the rest, if anything remains to be done, later. Please consider our responses as well as they give the explanations. Note that the changes needed to address issues raised by all reviewers are minor. Most issues are rooted in a fundamental misunderstanding of our paper (and in the case of one reviewer, we believe, in failing to read the paper altogether).\\n\\nThanks!\"}",
"{\"title\": \"No, your comments are not reasonable\", \"comment\": \"Yes, we criticize the review since we can recognize a bad review when we see it. Your is very bad. Uninformed, vague, not addressing anything that is actually contained in our paper, and yet critical. You create a strawman and attack it. We believe it is our duty to call this out.\\n\\nThe question mentioned in Paragraph 1 does not address our paper. It addresses and questions all of non-iid federated learning in its entirety. You are attacking a field here. You are asking questions which are not related to our paper. We do not deal with generalization, we do not compare to any centralized training mechanism. We solve an entirely different problem from the one you want our paper to solve. Please note you can't criticize paper on topic A (what we do) because it did not solve topic B (what you suggest we should investigate instead; which we do not). \\n\\nWe are open to constructive criticism that addresses specific issues in our paper. But no one can be expected to reply to vague and irrelevant comments. It's not possible.\", \"rev_3\": \"As a reviewer, do not just believe a reviewer. Read the paper and make up your own mind. As we argue in the reply to Rev 3, the criticism is based on misunderstanding, and not because there is an issue with our work. Please read our reply.\", \"re_experiments\": \"You say \\\"If the proposed convex optimization method can provide intuition for DNN-based algorithms...\\\" The premise is false, and hence the argument is false. We do not claim our method (here is another indicator you did not read or understand our paper - our paper is not primarily about a method) provides intuition for DNN-based algorithms. We do something that until now was not understood even in the convex case, and the correct scientific approach is to work on that first. Only then can one meaningfully attack the nonconvex problem. 
In the nonconvex regime, we would need to consider different methods with very different theory and parameter settings. The nonconvex case is beyond the scope of this work. We do not wish to run a heuristic, we want to develop some guiding theory first.\", \"code\": \"Thanks for the good suggestions.\", \"last_paragraphs\": \"Many of these questions are answered in the paper. Please read it. And many are irrelevant to our work. So, it is hard to respond to this.\\n\\nThe correct way to write a review is to be very detailed and specific about what is being criticized and why. Your review is on the opposite side of the spectrum, and as such ranks among the worst / least helpful / most confusing reviews we have ever read.\"}",
"{\"title\": \"Could you please upload an updated version of the paper?\", \"comment\": \"Thanks for the authors for the response! I'll reply to your responses in the near future. Besides, I notice that you mention that many concerns can be fixed easily and you will clarify some confusions in the final version of paper. But in order to better re-evaluate your paper, is that possible for you to upload an updated version during the rebuttal phase? Thanks!\\n\\nIf you didn't intend to submit another version during this phase, then please just ignore this comment. You don't need to do it in a hurry since the deadline is approaching.\"}",
"{\"title\": \"Acknowledging that I read your responses\", \"comment\": \"> So, we provide new links between local methods, personalization and communication complexity. This is the main novelty of our work explained as a high-level idea.\\n\\nI recognize this contribution. I can also see, from the perspective of other reviewers, how this may seem to make the contribution appear incremental. I think it could also have been communicated more clearly (e.g., moving the contents of footnote 1 to the main body of the paper).\\n\\n> There is too much to be said here, and much of it is different from the strongly convex case, so indeed, mixing the convex and nonconvex cases would not be a good idea.\\n\\nI fully agree that your paper already covers a lot of ground, and that including more results just for the sake of covering the non-convex setting would not be a good idea. I also comment you for not stopping at restricted non-convex classes like quasi-strong or PL. It's exciting to hear that you have additional results in this direction. The intention of my comment was more to point out that results on the non-convex setting may have drawn greater enthusiasm from this crowd.\\n\\n> Indeed, we are aware of that paper, however, we decided to not include it since our work was done sooner, and hence was not influenced by this paper.\\n\\nIt may be more of a limitation of the double-blind system adopted by ICLR, but unfortunately it doesn't matter whether your work was done sooner or was not influenced by those references (arxiv:2006.04735 and arxiv:2002.07839). They appeared more than a month before the ICLR submission deadline and they are very relevant to the topic of this paper, so they ought to be cited and discussed.\\n\\n> Intuitively, the blue curve should keep increasing and the orange curve should indeed keep decreasing as we increase lambda. However, we do not have a proof of that statement. 
So, this does not contradict our theory but instead provides additional computational insights complementing our theory.\\n\\nPlease elaborate on what you mean by \\\"additional computational insights\\\" here. The reply does not really address my question. You seem to agree intuitively with the point I mentioned. What happens for even larger values of $\\\\lambda$?\\n\\n> Typically, papers that do theory will necessarily ignore some systems level considerations, and **papers that do not ignore such things typically can't contain any meaningful theory.**\\n\\nThis is sad. One can always strive for a balance. Theory papers that ignore important practical considerations don't necessarily contain *meaningful* theory either.\"}"
"{\"title\": \"Reply\", \"comment\": \"**Footnote 1**\\n\\nWhile the optimization problem that we study is not new on its own (we arrived at it independently and naturally by thinking about the role of local steps in FL methods), the link of such an objective with FL is new. Specifically, the personalized FL objective (1) allows us to explain the role of local steps, as local methods appear as natural applications of various SGD methods to (1). We get meaningful convergence rates, and in terms of communication complexity, our approach shows (for the first time!) that local methods can perform better than their non-local counterparts. So, we provide new links between local methods, personalization and communication complexity. This is the main novelty of our work explained as a high-level idea.\\n\\n**Limited fit** \\n\\nWe decided to focus on (strongly) convex optimization in this work since the results, in that case, are easier to interpret. Until now, local methods have never been proven to outperform their non-local cousins on problems with heterogeneous data even in the convex or strongly convex case. We believe that it is important to properly explain the convex case first before moving to the non-convex one. \\n\\nNote that we could consider a relaxation of strong convexity such as quasi-strong convexity or the PL condition, and we can easily derive similar rates and claim that our results apply to such an (arguably limited) non-convex case. There is nothing preventing us from deriving rates for nonconvex functions. We did not do so as there is already a lot of material covered in the paper and it does not make sense to make the coverage even more dense. We plan to do this in a follow-up work, and already have some results in this direction. 
There is too much to be said here, and much of it is different from the strongly convex case, so indeed, mixing the convex and nonconvex cases would not be a good idea.\\n\\n**Minor**\\n\\n**Re the motivation point**: Thanks a lot for the reference! Indeed, we are aware of that paper; however, we decided not to include it since our work was done sooner, and hence was not influenced by this paper.\\n\\n**Re Fig 1**\\n\\nVery good question. Intuitively, the blue curve should keep increasing and the orange curve should indeed keep decreasing as we increase lambda. However, we do not have a proof of that statement. So, this does not contradict our theory but instead provides additional computational insights complementing our theory.\\n\\n**Re privacy** \\n\\nWe agree with the reviewer that our current algorithm designs do not take privacy into consideration. While privacy is a very important aspect of FL, in this paper we tackle different FL challenges and thus we ignore privacy issues. However, similar to the classical local SGD methods, our algorithms can be implemented in a private fashion as well using similar ideas as you have pointed out. We will add a remark about this in the paper; thanks! Typically, papers that do theory will necessarily ignore some systems level considerations, and papers that do not ignore such things typically can't contain any meaningful theory.\"}"
"{\"title\": \"Reply to \\\"New formulation\\\" and \\\"Experiments\\\"\", \"comment\": \"**New formulation**\\n\\n**Issue 1**\\n\\nRecall that $f(x)=\\\\frac{1}{n}\\\\sum_{i=1}^n f_i(x_i)$, where $x_1,\\\\dots,x_n\\\\in R^d$. Let $P(z) = \\\\frac{1}{n}\\\\sum_i f_i(z)$, where $z\\\\in R^d$. Recall that for $x=(x_1,...,x_n) \\\\in R^d \\\\times \\\\dots \\\\times R^d$ we have $\\\\bar{x}=\\\\frac{1}{n}\\\\sum_i x_i \\\\in R^d$. We claim that $$||\\\\nabla P(\\\\bar{x}(\\\\lambda))||^2 = O(1/\\\\lambda).$$ \\n\\nThis is the statement we referred to (and indeed, we did not include it in the paper). This means that as $\\\\lambda\\\\to \\\\infty$, the gradient of $P$ evaluated at $\\\\bar{x}(\\\\lambda)$ goes to zero. Since $P$ is strongly convex, and since $x(\\\\infty)$ is the unique solution of $\\\\min P,$ we must have $\\\\bar{x}(\\\\lambda) \\\\to x(\\\\infty)$.\\n\\n Let us prove the claim. First, observe that $$||\\\\nabla P(\\\\bar{x}(\\\\lambda))||^2 = ||\\\\frac{1}{n}\\\\sum_i \\\\nabla f_i(\\\\bar{x}(\\\\lambda) ) ||^2 = ||\\\\frac{1}{n}\\\\sum_i \\\\nabla f_i(\\\\bar{x}(\\\\lambda) ) - \\\\frac{1}{n}\\\\sum_i \\\\nabla f_i(x_i(\\\\lambda) ) ||^2,$$ where the last identity is due to Theorem 3.2 which says that $ \\\\frac{1}{n}\\\\sum_i \\\\nabla f_i(x_i(\\\\lambda) )=0$. By applying Jensen's inequality and Lipschitz continuity of the gradients of the functions $f_i$ (i.e., $L$-smoothness), we get $$||\\\\nabla P(\\\\bar{x}(\\\\lambda))||^2 \\\\leq \\\\frac{1}{n} \\\\sum_i || \\\\nabla f_i(\\\\bar{x}(\\\\lambda) ) - \\\\nabla f_i(x_i(\\\\lambda) ) ||^2 \\\\leq \\\\frac{L^2 }{n} \\\\sum_i ||\\\\bar{x}(\\\\lambda) - x_i(\\\\lambda)||^2 = 2 L^2 \\\\psi(x(\\\\lambda)).$$\\n\\nThe claim now follows by applying Theorem 3.1 which says that $\\\\psi(x(\\\\lambda))=O(1/\\\\lambda)$. More can be established than this using similar arguments. 
For instance, one can obtain a bound involving the individual $x_i(\\\\lambda)$ rather than $\\\\bar{x}(\\\\lambda)$, or a $O(1/\\\\lambda)$ bound on the squared distance to $x(\\\\infty)$ (the latter under an additional assumption).\\n\\nWe will clarify this in the camera ready. \\n\\n**Issues 2 and 3**\\n\\nIndeed, the formulation itself was already considered earlier in the literature; we mention this already in footnote 1 (we will add reference [1]; thanks!). Having said that, i) we did not borrow this formulation from anywhere; we came to it naturally through our desire to understand the meaning of local steps in FL optimizers, ii) the formulation was never considered in the context of federated learning or local methods in particular, and it is this context, and what we manage to explain within it, that is important. \\n\\nThe approach from [1] only takes a *single* local step in between communication rounds regardless of the value of the regularizer -- we have demonstrated that this is suboptimal and one should do more local work in between communication rounds. \\n\\n**Experiments**\\n\\nWe *very strongly* disagree with the claim that \\\"Most theoretical claims are not validated empirically.\\\" Only one of our experiments is designed to show the importance of variance reduction. In the appendix, we also study other aspects such as the effect of the parameter $\\\\lambda$ on the convergence speed and the effect of the communication frequency $p$ on the convergence speed (this is also demonstrated in Fig. 2). The reviewer missed this.\\n\\n\\n[1] Zhang et al. Deep learning with elastic averaging SGD. NeurIPS 2015.\\n[2] Wang et al. Overlap Local-SGD: An algorithmic approach to hide communication delay in distributed SGD. ICASSP 2020.\\n[3] Yu et al. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization. ICML 2019.\"}"
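The closed-form argument above can be sanity-checked numerically. The sketch below is my own toy construction (not from the paper): it uses 1-D quadratic losses $f_i(x) = \frac{a_i}{2}(x - b_i)^2$ with arbitrarily chosen heterogeneous curvatures $a_i$ and minimizers $b_i$, solves the stationarity conditions of formulation (1) exactly, and checks that both the penalty $\psi(x(\lambda))$ and $\|\nabla P(\bar{x}(\lambda))\|$ vanish as $\lambda$ grows, as the claim predicts.

```python
import numpy as np

# Hedged sketch (not from the paper): 1-D quadratic losses f_i(x) = a_i/2 (x - b_i)^2
# with heterogeneous curvatures a_i and minimizers b_i, chosen arbitrarily.
a = np.array([0.5, 1.0, 2.0, 3.0])
b = np.array([-2.0, -1.0, 1.0, 2.0])

def solve(lmbda):
    """Solve min_x (1/n) sum_i f_i(x_i) + (lambda/2n) sum_i (x_i - xbar)^2 in closed form.

    Stationarity: a_i (x_i - b_i) + lambda (x_i - xbar) = 0, so
    x_i = (a_i b_i + lambda xbar) / (a_i + lambda); averaging gives a linear
    fixed-point equation for xbar, solved directly below.
    """
    xbar = np.mean(a * b / (a + lmbda)) / (1.0 - lmbda * np.mean(1.0 / (a + lmbda)))
    x = (a * b + lmbda * xbar) / (a + lmbda)
    return x, xbar

def grad_P(z):
    # P(z) = (1/n) sum_i f_i(z); its gradient evaluated at the averaged model
    return np.mean(a * (z - b))

for lmbda in [1.0, 10.0, 100.0, 1000.0]:
    x, xbar = solve(lmbda)
    psi = 0.5 * np.mean((x - xbar) ** 2)
    print(f"lambda={lmbda:7.1f}  psi={psi:.2e}  |grad P(xbar)|={abs(grad_P(xbar)):.2e}")
```

Both printed quantities shrink as $\lambda$ grows, consistent with $\psi(x(\lambda)) = O(1/\lambda)$ and $\|\nabla P(\bar{x}(\lambda))\|^2 = O(1/\lambda)$ (for quadratics the decay is in fact faster than the general bound requires).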
"{\"title\": \"Reply to \\\"Convergence theory\\\"\", \"comment\": \"**Convergence theory**\\n\\nIndeed, we do ignore the second term in the convergence theorem of LGD. This is done *on purpose* because it is not important. Why? Because later on in the paper (Sec 5. and the appendix) we propose a way to remove the neighborhood (i.e., the effect of $c$) completely (see footnote 5). This new variant of LGD is somewhat more complicated (it's a variance reduced SGD and not simple SGD), and we did not want to make the main body of the paper more complicated than needed to make our point. Indeed, LGD offers just enough structure for us to be able to make most of our main points clear. So, all our conclusions are valid, but some need to be seen as conclusions that apply to the more complicated methods we describe later on. \\n\\nWe see that there is a mistake in the sentence above equation (9) (however, this is clarified by equation (9)): the $\\\\epsilon$-neighborhood should be replaced by $\\\\epsilon\\\\|x^0-x(\\\\lambda) \\\\|^2 + \\\\frac{2n\\\\alpha \\\\sigma^2}{\\\\mu}$. Similarly, we will stress the target neighborhood size in the last paragraph of Sec 4. \\n\\nLastly, we would like to stress that the statements of Theorem 4.1 and Corollary 4.3 are correct. Note that linear convergence of SGD *with a fixed stepsize* to a $O(stepsize)$ neighborhood of the optimum is a classical result, and not an issue with our analysis. See the paper of Gower et al (2019) whose theorem we are applying. If one wants to make the neighborhood small, e.g., $O(\\\\epsilon)$, there are known techniques to achieve this. For instance, one may use a decreasing stepsize. We do not do this since this reduces the linear rate to a sublinear one. Another approach is to choose a small stepsize. 
Indeed, if we set $\\\\alpha = \\\\frac{\\\\epsilon \\\\mu}{4n \\\\sigma^2}$, then the constant error/neighborhood term in Eq (9) becomes bounded by $\\\\epsilon/2$, and one can achieve $\\\\epsilon$ accuracy as long as $k=O(\\\\frac{1}{\\\\epsilon} \\\\log \\\\frac{1}{\\\\epsilon})$. Again, this makes the method slower than linear. Instead, we employ control variates (see Section 5) to remove this term completely (notice that Corollary 5.2 does not have any such term). However, we made a conscious choice to use LGD as the model method for explaining our main contributions, as we believe this will be more easily understood by more people. \\n\\nWe will make this more clear in the camera ready version of the paper.\"}",
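The fixed-stepsize behavior invoked here is easy to see on a toy problem. The following sketch is my own construction (not the paper's experiment): SGD on $f(x) = \frac{1}{2}x^2$ (so $\mu = L = 1$) with additive gradient noise of variance $\sigma^2$ converges linearly and then plateaus at a squared error proportional to the stepsize $\alpha$, mirroring the $O(\alpha\sigma^2/\mu)$ neighborhood discussed above.

```python
import numpy as np

# Hedged sketch (my own toy setup): SGD on f(x) = 0.5 x^2 with
# stochastic gradient g = x + sigma * noise, run at a fixed stepsize.
def sgd_plateau(alpha, sigma=1.0, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = 5.0
    tail = []
    for k in range(steps):
        x -= alpha * (x + sigma * rng.standard_normal())
        if k >= steps - 5000:          # average over the plateau only
            tail.append(x * x)
    return float(np.mean(tail))        # mean squared error at stationarity

e_big = sgd_plateau(alpha=0.1)         # plateau level roughly alpha*sigma^2/(2-alpha)
e_small = sgd_plateau(alpha=0.01)
print(e_big, e_small)                  # smaller stepsize -> plateau shrinks ~linearly in alpha
```

Halving the neighborhood by shrinking $\alpha$ also slows the linear rate, which is the trade-off the authors sidestep in Section 5 via control variates.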
"{\"title\": \"Reply to \\\"New algorithm: Loopless LGD\\\"\", \"comment\": \"**New algorithm: Loopless LGD**\\n\\n**Issue 1: LGD is not new**\\n\\nWe thank the reviewer for the reference [2]: good find! Indeed, the method is similar to the simplest of our methods (LGD) as it is a variant of local SGD which performs a step towards the average instead of the full averaging. We will properly cite the mentioned paper in the revised version, thanks! \\n\\nWhile the two methods (L2GD and the method from [2]) are very similar in terms of the statement of the algorithm, this does not in any way diminish our key contributions, as our main contributions are not algorithmic but methodological.\\n\\nWe do design several methods in the paper, and LGD is the simplest of them all. We use it precisely because it offers a *very simple model example of a local method*, with just the right amount of structure, and without the additional features our more advanced methods provide (see the appendix), enabling us to explain our main contributions clearly. We have developed and described much more advanced methods than LGD and the method in [2], but we start with LGD to make the narrative and explanations as simple and clear as possible. Now that we know LGD is similar to [2], we will mention this in the paper, but our narrative and contributions are not diminished. After all, LGD is just SGD with importance sampling applied to (1) seen as a 2-sum problem, and hence is not a new method! Its interpretation and connection to FL is what matters. \\n\\nOne can ask what the advantage of our approach is over the classical FL methods. 
The simple answer is: *we don't compete against them; we explain them!* Furthermore, most of the extensions we develop in the appendix were not done in the classical FL setup; this is yet another contribution, albeit a minor one when compared to our key contribution: exhibiting the first link between local methods, personalization and communication efficiency. We believe this is of major import to the FL community. \\n\\n**Issue 2: Importance of random number of local steps**\\n\\nThe fact that LGD uses a random number of local steps is not important. It is just *convenient* since it allows us to generate local methods as special cases of (existing and novel variants of) SGD applied to (1). It may be the case that a random number of local steps leads to slightly better bounds (e.g., in constants). However, we did not investigate this as it was not important for the purposes of our paper. In any case, the dependence of the convergence rate on $\\\\varepsilon$ is certainly not affected by the use of a random vs a fixed number of local steps.\\n\\nNote that the method from [2] was analyzed as a method for solving the classical FL objective ($\\\\lambda=+\\\\infty$).\"}"
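The loopless mechanism under discussion (communicate with probability $p$, otherwise take a local step) is easy to sketch. The toy implementation below is my own illustration, not the paper's code: the quadratic losses, stepsize and $\lambda$ are arbitrary assumptions, and the two branches are scaled by $1/p$ and $1/(1-p)$ so that the expected update is a gradient step on $f + \lambda\psi$, mirroring the "SGD on a 2-sum problem" view. The number of local steps between communications is then geometric with mean $(1-p)/p$.

```python
import numpy as np

# Hedged sketch of a "loopless" local method in the spirit of L2GD: at each
# iteration flip a p-coin; with probability p take an aggregation step toward
# the average model, otherwise take a local gradient step on f_i(x) = 0.5||x - b_i||^2.
# The losses, stepsize and lambda below are illustrative, not the paper's setup.
rng = np.random.default_rng(1)
n, d, p, lam, alpha = 4, 3, 0.2, 1.0, 0.1
b = rng.standard_normal((n, d))          # local minimizers, one per device
x = np.zeros((n, d))                     # one model per device

runs, run = [], 0                        # lengths of uninterrupted local phases
for _ in range(200_000):
    if rng.random() < p:                 # communication step (probability p)
        xbar = x.mean(axis=0)
        x = x + alpha * (lam / (n * p)) * (xbar - x)
        runs.append(run)
        run = 0
    else:                                # local step (probability 1 - p)
        x = x - alpha * (1.0 / (n * (1 - p))) * (x - b)
        run += 1

print(np.mean(runs))                     # ~ (1-p)/p = 4 local steps per round
```

With $p = 0.2$ this gives about 4 local steps per communication round on average; shrinking $p$ lengthens the local phases, which is exactly the regime the authors associate with small $\lambda$ and more personalization. The randomness is indeed only a convenience: it is what lets both branches be unbiased estimators of the same gradient.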
"{\"title\": \"Reply to \\\"New insights about the role of local steps\\\"\", \"comment\": \"We thank the reviewer for the many questions and issues raised. Appreciated. Some are easily addressable and very minor, as we explain below. All remaining issues come from a fundamental misunderstanding by the reviewer of what our paper's key contributions are. We are happy to explain.\\n\\nWe kindly ask the reviewer to read our response and reconsider their score, which we believe is unjustified. \\n\\n**New insights about the role of local steps: Issue 1**\\n\\nWe start with this point because we believe that it is the most crucial, and it would be a major issue if the mentioned criticism were valid. It is not. We insist that the claim \\\"the role of local steps in gradient type methods is not to reduce communication complexity, as is generally believed. Indeed, there is no theoretical result supporting this claim in the key heterogeneous data regime\\\"\\nis correct.\\n\\nNote that the provided reference [3] {\\\\bf does not consider the data-heterogeneous setup}; [3] considers either identical data or a bounded dissimilarity between local gradients (equation (3) therein) over the whole domain. Under such a data similarity assumption, local SGD can outperform classical SGD (similar results have been provided in the convex case as well). \\n\\nHowever, and we do make this very clear in the paper, we are concerned with the setting that is much more difficult and much more important to FL: the data-heterogeneous setting. That is, we do not use any kind of data similarity assumption whatsoever!\\n\\nOur paper is the first to show that (variants of) local SGD can outperform synchronous SGD in communication complexity *if no data similarity is assumed*. We show that this is the case when one aims to solve our new FL formulation (1), which happens to have a natural interpretation as a personalized FL formulation, with the personalization level controlled by a single global parameter $\\\\lambda$. 
In other words, we interpret the purpose of local steps: we claim they indeed do help in reducing communication complexity even in the heterogeneous data regime (and are the first to prove so), but in order for this to happen, we need to see them as methods for solving the personalized FL formulation (1) we propose. Moreover, as we show, (some variants of known and new) local methods can indeed be seen as methods for solving (1). We spend considerable real estate in the main paper to explain this on what we believe is the simplest of all local methods: local gradient descent (LGD). Indeed, LGD can be seen as SGD with importance sampling applied to (1) seen as a 2-sum problem. This alone, we believe, offers new conceptual insights to the FL community. \\n\\nWe stress that in the practical FL applications, the often invoked bounded data dissimilarity assumption (between local gradients over the whole domain) does not hold: in such a case, local GD methods do not provide any benefit over their non-local cousins, yet they are still the most prominent FL optimizers. \\n\\n**New insights about the role of local steps: Issue 2**\\n\\nWe did not intend this sentence to be interpreted the way you interpreted it. It was intended as an informal \\\"intuitive\\\" statement capturing some of the essence of our findings. It is not precise and was not intended to be. We will find a way to reformulate this sentence to avoid the kinds of confusion you are pointing out. However, notice this is a very minor point about a single sentence which is easily fixable.\"}",
"{\"title\": \"First concern does not address our key contribution; and the second and third concerns are easily explained - not an issue with our work\", \"comment\": \"First concern.\\n\\nPenalization is an old and well studied technique in optimization, and of course, we are not claiming novelty of this type. Similar penalties were considered earlier in the literature in very different contexts; see footnote 1. Our main insight is that the formulation we propose is *new and particularly meaningful in the context of federated learning (FL), resolving or at least giving important insights into several key issues in FL*. We spend considerable space in the paper to explain this. Let us reiterate some of these points here:\\n\\nDespite a significant effort by the FL community, and despite the fact that this was (and still is!) the original motivation for their development, local methods (e.g., LGD) were never proven to reduce communication complexity when compared to their non-local counterparts in the heterogeneous data regime (see Woodworth et al. 2020 https://arxiv.org/pdf/2006.04735.pdf). There are also counterexamples which show this can't be done in general, even if one considers just convex quadratic functions. The belief that local steps are performed to reduce communication complexity is, we argue, one of the key confusions the FL community seems to suffer from, and our work is trying to remedy this situation. One can draw several possible conclusions from this: i) local methods have better communication complexity, but we still do not know why, as our analysis tools can't fully explain how well they work; ii) local methods are not actually beating non-local methods in terms of communication complexity, as is universally asserted, and hence should be replaced by better performing methods; iii) local methods are good at solving a different problem of crucial importance to FL, but we do not know what problem it is. 
\\n\\nOur work is motivated by mounting evidence that ii) is true (in the heterogeneous data setting), which motivated us to think about iii) as a possible solution. Indeed, we manage to show that if one thinks of local methods as methods for solving our FL formulation (1) instead, then they become superior to non-local methods in communication complexity! This is the first time such superiority of local methods over nonlocal methods is shown in the heterogeneous data regime, which is the most important regime for FL! Hence, we believe our work is a major conceptual breakthrough in the field. See lines 108-116: the communication complexity of LGD decreases to 0 as $\\\\lambda \\\\to 0$. So, if one aims for more personalization (corresponding to small $\\\\lambda$), then local methods will have better and better complexity. In the extreme case when one requires absolute personalization ($\\\\lambda=0$), each device is simply training a model from its own data only, and no communication is needed. So, this makes very good intuitive sense. Note that, for example, LGD (one of the most basic variants of FedAvg) can be interpreted as SGD with importance sampling applied to (1) seen as a 2-sum problem. Yes, we did not have to come up with a new analysis for SGD in this case; we clearly explain this in the paper. So, our novelty here is not in the analysis, which, as you say and as we say in the paper, simply follows from Gower et al (2019). The novelty is the insight that SGD applied to (1) in the way we do it generates LGD! So, the mystery of the utility of local steps evaporates: they are there to put more emphasis on $f$ instead of the penalty. However, more emphasis of this type is desired precisely when $\\\\lambda$ is small, i.e., when we require more personalization. So, the key novelty of our paper is that we connect personalization, communication complexity and local methods together, in an insightful manner. 
Our several new local methods are a secondary contribution.\\n\\nSecond concern.\\n\\nWe study how $\\\\lambda$ influences the convergence rate and the optimal # of local steps. We do not study which choice of $\\\\lambda$ is better from a generalization perspective and hence we can't give practitioners a theoretical prescription of this type. \\n\\nWhat we mean by saying that \\\"such purely local models are rarely useful\\\" is this: in practice, devices do not have enough data to be able to train models using their own data only, which is why we need to resort to distributed methods such as FedAvg or LGD. If they had enough data, $\\\\lambda=0$ would be a perfectly fine choice, and there would be no need to do any communication. So, $\\\\lambda=0$ would be optimal and FL in such a data-rich regime would simply reduce to $n$ independent training problems performed by the devices independently. Yes, $\\\\lambda =0$ leads to the smallest training loss. However, this does not mean we would get the smallest testing loss.\\n\\nThird concern.\\n\\nThe reason is simple: the other methods solve a different optimization problem and hence are not comparable. Note that none of these methods outperform their non-local cousins in terms of the comm. complexity in the heterog. data setting.\"}"
"{\"title\": \"My comments are reasonable\", \"comment\": \"The authors solely criticize my comments but do not answer the questions in depth. Let me make my comments more concrete.\\n\\nThe fundamental question in Paragraph 1 is not superficial. Please answer it in depth. If the accuracy loss is high in the non-IID setting, why don't we directly centralize encrypted data in a secure data center and delete the data after completing the training? We need to solve the hard core problem rather than saying you have many contributions. From the other reviewers' comments, I believe my judgement is reasonable (See R3). \\n\\nI don't agree with the authors' comments on deep learning. If the proposed convex optimization method can provide intuition for DNN-based algorithms, please demonstrate it on a small CNN/RNN. LibSVM is a toy dataset nowadays, thus the experiments cannot convince me at all, even with the theory.\\n\\n**Code**: It seems the authors updated the code recently. This time I think the code is clear to me. But I still have some suggestions:\\n1) make the variables readable using longer names;\\n2) wrap the key contribution/algorithm/feature in a small function/class, so others can directly reuse it without the need to do code extraction;\\n3) add more comments and write a good README.md file, telling readers the functionality of each class and file.\\n\\nMy comments in the last paragraph mean a better presentation would help readers understand your work, although I used many questions to express my concern. Please make the applicability of the proposed method clearer to readers in the revision.\"}"
"{\"title\": \"This review is superficial and utterly confused.\", \"comment\": [\"This review does not seem to address any substance actually contained in our paper. Instead, it offers general philosophical thoughts related to the general theme of our paper. It is not possible to meaningfully respond to a review of this type. These kinds of reviews are not helpful, and actually inflict quite a bit of harm on the community.\", \"\\\"The bound seems not tight.\\\" What bound? We have many. We believe we offer several efficient algorithms, starting with simple ones which are easier to understand (for pedagogical/clarity reasons), and gradually adding features and enhancements (e.g., adding control variates, partial participation and so on). Your comment is generic and does not address our work. It could have been made without actually reading our paper at all. This comment should be ignored by the AC.\", \"First paragraph in weaknesses: you ask some questions but our paper is not about this. Our contributions are clearly stated and you do not refer to them. You do not seem to have a genuine interest in what we actually accomplished.\", \"Experiments: The results are not weak. Our theory is for convex problems, and the methods are fine-tuned for convex problems. We test the theoretical predictions with carefully designed experiments and observe that our theory predicts what happens in experiments very well. This is the ideal scenario for any scientific work containing theory. The experiments are strong. Testing our methods in the nonconvex regime does not make much sense; we would need to first develop the associated theory, and this is beyond the scope of the current paper. Not all FL tasks are deep learning tasks.\", \"Code: You criticize the readability of our code, but offer no concrete evidence of what is wrong. Again, this is a generic comment that could have been made about any paper containing a method. This kind of comment is not helpful. 
If you found concrete issues, list them. We are happy to correct and improve our paper, but we can't do this if issues are not pointed out to us.\", \"Last paragraph: These are again comments divorced from the actual contents and contributions of our paper. It is very clear to us that this reviewer simply failed to understand our work and contributions. Or perhaps the reviewer did not even read the work properly. In any case, it is not possible to respond meaningfully to a review that fails to address the evaluated paper to the degree this review does.\"]}"
"{\"title\": \"Authors propose a model personalization method for federated learning with a new optimization formulation which provides an explicit trade-off between the global and local models. In addition, the authors develop several efficient variants of SGD for solving the new formulation and prove communication complexity guarantees.\", \"review\": \"***Strong\\n\\nPersonalization is a hardcore problem in FL. The authors target an important problem.\\n\\nThe theoretical analysis seems correct, but the bounds do not seem tight enough.\\n\\n***Weakness\\n\\nRecently, many personalized methods have been proposed for FL. In the I.I.D. setting, local SGD training (FedAvg) can obtain similar accuracy as centralized training with a theoretical guarantee. But in the non-I.I.D. setting, I am curious to know what the ultimate goal of optimization is. Can the proposed personalized methods obtain accuracy comparable to centralized training? If we do not compare with the centralized accuracy, how can we know whether the optimized personalized model can obtain sufficient accuracy for practical applications?\\n\\nThe experimental results are weak. The authors only provide results on the LR model for toy datasets (LibSVM). Without non-convex experiments, it is hard to believe the proposed method works in practice given that DNN-based models dominate nearly all ML tasks.\\n\\nThe code style and readability are poor, which discourages the adoption of the proposed method.\\n\\nAlthough the authors mention some contributions of the proposed method, I still cannot see the advantages of the new formulation over conventional federated optimization. What are its limitations? What's the cost of using this method? When should we choose this algorithm in practice? At which degree of non-IIDness? 
Playing with optimization analysis tricks won't solve the personalization challenge of federated learning in practice.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"Well-written paper; Concerns regarding novelty of the formulation and analysis, the role of penalty parameter, and comparison with related works.\", \"review\": \"Summary of the paper:\\u00a0The paper proposes a new formulation for the federated learning problem, in which each agent has its local model, and a penalty term is added to the objective function to control the deviation of these local models from their average. Next, the authors develop a randomized algorithm to tackle this problem and characterize its convergence under several assumptions, such as smoothness and strong convexity. They also discuss variants of their algorithm, which uses variance reduction techniques or considers users' partial participation.\\n\\n\\nThe paper is well-written, and the goals, problem formulation, and contributions are all explained in detail. However, the reviewer has a number of concerns, which are listed below:\\n\\n\\n\\nThe first concern is regarding the novelty of the formulation or analysis. The idea of this paper's formulation, giving a copy to each agent and adding a regularizer to keep the copies close, has been discussed in the distributed literature, for instance, in ADMM. Moreover, it is not clear which part of the analysis is novel or challenging. It seems that the authors use an unbiased estimator to solve an optimization problem with a smooth and strongly convex objective function. This setting has been studied extensively in the literature, including applying variance reduction techniques.\\n\\n\\n\\nSecond, the regularizer parameter $\\\\lambda$ seems to be at the heart of this framework. With $\\\\lambda=\\\\infty$, the problem reduces to the classic federated learning setting, and when $\\\\lambda=0$, the formulation boils down to the case that each agent solves its own problem. 
In particular, for the latter, the authors claim that \\\"such purely local models are rarely useful.\\\" However, the reasoning behind this claim is not clear; for instance, from a theoretical point of view, results such as Theorem 3.1 suggest that setting $\\\\lambda=0$ will lead to the minimum loss $f$. In other words, it is not clear how we should compare the different models trained by setting different values of $\\\\lambda$, and which range of $\\\\lambda$ leads to a good model with respect to that measure.\\n\\n\\n\\nThird, as stated in the introduction, several methods have been recently proposed to address the heterogeneous case or achieve personalization in the federated learning problem. I wonder why the authors have only compared their methods against each other in experiments and have not included those methods for comparison.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"Review #2\", \"review\": \"## Summary\\n\\nThis paper proposes a new formulation of federated learning, which balances between the traditional global model and purely local models. The authors discuss the advantages of the new formulation and propose a new algorithm, L2GD, to solve the problem. They theoretically analyze the communication complexity of L2GD under strongly-convex settings and propose several algorithmic variants.\\n\\n## Pros\\n1. The authors developed a set of algorithms based on L2GD and provided theoretical analysis. The efforts are appreciated.\\n\\n\\n## Cons\\n\\nUnfortunately, most contributions of this paper don't make sense to me. I have concerns regarding the novelty and correctness of the statements made by this paper. Detailed comments are listed as follows.\\n\\n**New formulation**\\n1. In section 2, the authors claim that \\\"we prove that the optimal local models converge to the traditional global model characterized by (1) at the rate $O(1/\\\\lambda)$\\\". However, I didn't find any discussion or proof around this statement in the following sections. From my understanding, there should be some equations showing how $x(\\\\lambda)-x(\\\\infty)$ or $f(x(\\\\lambda))-f(x(\\\\infty))$ changes over $\\\\lambda$. But I didn't find any.\\n2. In the experiments, the authors didn't compare the proposed formulation with the original formulation. How much benefit one can obtain from the new formulation is unclear, making this paper incomplete.\\n3. The formulation is not new. A nearly identical formulation can be found in [1]. The authors didn't notice this paper. Since [1] also proposed an algorithm, EASGD, to solve the new formulation, the authors are supposed to compare L2GD with the algorithm in [1].\\n\\n**New algorithm: Loopless LGD**\\n1. The algorithm is also not new. An extremely similar algorithm has appeared in [2]. By some re-parameterization, I believe they are equivalent to each other. The authors missed this reference. 
They should justify the differences and compare the results.\\n2. It is unclear why L2GD uses random local steps instead of a fixed one. Or what are the benefits of the randomized one?\\n\\n**Convergence theory**\\n1. The conclusions of the convergence analysis are questionable. In particular, when obtaining the convergence rate, it seems that the authors completely ignore the second term in (9). In general, people use $f(x)-f(x_*)<\\\\epsilon$ or $\\\\|x-x_* \\\\| < \\\\epsilon$ to define the $\\\\epsilon$-neighborhood of the optimum. However, in this paper, the authors use $\\\\|x-x_* \\\\| < \\\\epsilon \\\\|x_0 - x_* \\\\| + c$ to define the neighborhood and just ignore the second term when deriving the rate. Under this definition, they draw the conclusion that L2GD can improve the communication complexity of GD. This can be misleading and questionable. In GD, we don't have the second term in (9) at all.\\n2. Similarly, when obtaining the best value of $p$, the authors only optimize the first term. However, the second term in (9) also depends on $p$. The authors seem to ignore it again.\\n\\n**New insights about the role of local steps**\\n1. The authors state that \\\"the role of local steps in gradient type methods is not to reduce communication complexity, as is generally believed. Indeed, there is no theoretical result supporting this claim in the key heterogeneous data regime.\\\" This statement is not true. It has been shown in the literature (e.g., [3]) that local SGD can achieve the same rate $1/\\\\sqrt{n K}$ as synchronous SGD but only uses $O(n^{3/4} T^{3/4})$ communication rounds, while synchronous SGD uses $T$ rounds.\\n2. \\\"The more local steps are taken, the more we bias the method towards the purely local models.\\\" I feel we cannot draw this conclusion from this paper's analysis. In particular, in this paper, the choice of local steps is controlled by the parameter $\\\\lambda$ (expected local steps = $1+L/\\\\lambda$). 
When we set a small $\\\\lambda$, we get two consequences: (1) the formulation will place more emphasis on $f(x)$, and hence, the solution is biased towards purely local models. (2) the \\\"optimal\\\" local steps derived in this paper become larger. Obviously, these two consequences are parallel to each other. We cannot say the second point is the reason for the first point. Instead of setting the expected local steps to be $1+L/\\\\lambda$, one can also use other values, which won't influence the final solution.\\n\\n**Experiments**\\nThe experimental results can only show the importance of variance reduction, which seems to be a minor contribution of the paper. Most theoretical claims are not validated empirically.\\n\\n## Post-rebuttal\\nThanks to the authors for the clarifications! I appreciate it. However, some of my concerns are not addressed. \\n- The main concern I have is about the new insight on local update methods. Basically, the authors obtain the insights based on a newly proposed algorithm (L2GD, let's call this algorithm B) and a new problem formulation (let's call this formulation B). However, they want to apply the insight from algorithm B and formulation B to algorithm A (original local update methods) and formulation A (original FL formulation). It is obvious that one cannot draw this conclusion because both the algorithm and the formulation are different.\\n- Second, as I stated in the original review, I don't think one can obtain the insights from the analyses in this paper. The authors didn't directly answer my question and just said \\\"they didn't expect people to interpret it in this way\\\". But it is still unclear how to correctly understand their insights.\\n\\nBased on the above two points, I strongly feel that their main insights about the effects of local updates should be further and carefully examined. The current version could be misleading. Besides, I also have the following minor concerns:\\n\\n- The authors claim that [Yu et al. 
ICML 2019] didn't consider the heterogeneous setting. This is not true. Although [Yu et al. ICML2019] assumes that the gradient dissimilarity is uniformly bounded (which is widely used in the literature), their setting is still a non-iid setting. It's unfair to say that they only study the IID data setting. So the second motivation of this paper does not make sense to me. The authors oversell their contribution. More precisely, their contribution is not the first proof under a data-heterogeneous setting but rather a new proof without a data-similarity assumption.\\n- \\\"non-local cousins\\\" is unclear and hasn't been properly defined in the paper. For local SGD with mini-batch size , local steps and clients, there are two non-local cousins: (a) SGD with mini-batch size ; and (b) SGD with mini-batch size . It seems that the authors misused these two algorithms. In the response, they agree that [Yu et al. ICML 2019] proves \\\"with data dissimilarity assumption, local SGD can improve the communication complexity of classical SGD\\\". Here, classical SGD refers to algorithm (a). In the updated paper, they cite two papers from Woodworth et al. to support their claim. However, the non-local method in Woodworth et al. is algorithm (b). The authors should formally define which non-local algorithm they want to compare with.\\n- In the paper, the authors claim that they prove for the first time that local methods can improve the communication complexity of the non-local cousins. However, this statement is overselling. The more precise version is that they prove that the variance-reduced version of local methods can improve the communication complexity of the vanilla non-local algorithms.\\n- It seems that the authors want to claim a lot of contributions in this single paper and they didn't organize these contributions well. Hence, it causes difficulties for readers to understand their true novelty. 
I recommend that the authors rewrite the paper and carefully consider the paper structure. For example, if I understand correctly, the main contribution of this paper should be the insights on local updates. However, the authors didn't show any experiments on this insight in the main paper (they put them in the appendix). Instead, they just validate the effect of variance reduction in the main paper, which is just a minor point. Also, in the introduction, there is a long paragraph introducing L2GD as one of the main contributions. However, as discussed in the responses, L2GD is not a new algorithm. The authors don't need to give it so much emphasis, nor should they claim it as a contribution.\\n- Also, in [1] EASGD does use multiple local steps. The authors should compare L2GD with EASGD, as they both are designed to minimize the new formulation.\\n\\n## References\\n\\n[1] Zhang et al. Deep learning with elastic averaging SGD. NeurIPS 2015.\\n\\n[2] Wang et al. Overlap Local-SGD: An algorithmic approach to hide communication delay in distributed SGD. ICASSP 2020.\\n\\n[3] Yu et al. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization. ICML 2019.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting insights into federated learning, possibly limited by the focus on strong convexity\", \"review\": \"Interesting insights into federated learning, possibly limited by the focus on strong convexity\\n\\nThis paper considers distributed training problems arising in the context of federated learning. It proposes a novel framing of the problem as a compromise between fitting a model locally to the data available at a device, and fitting a model globally to the data from all devices. This leads to the so-called loopless local gradient descent (L2GD) method, which is loosely related to the popular FedAvg/LocalSGD method, and also loosely related to a randomized version of the well-studied ADMM for consensus optimization problems. \\n\\nThe paper provides theoretical convergence guarantees for L2GD and related variance-reduced versions L2GD+ and L2GD++. As is pointed out in footnote 1, the ideas underlying L2GD and its analysis have been previously explored in the literature. It would be worth clarifying in the paper (in footnote 1 or elsewhere) which aspects of L2GD and its analysis are novel and not completely subsumed by the previous works Liu et al. (2017) or Wang et al. (2018). The last sentence of footnote 1 is somewhat in this direction, but is not very precise.\\n\\nI generally think the perspective proposed in this paper, along with the L2GD method, are novel and interesting, and I expect they will be useful to researchers working on federated learning. The main limitation of the work, in my view, is the limited fit and interest to the broader ICLR audience. For example, the CFP emphasizes the importance of non-convex optimization, and deep learning methods and architectures for representation learning. On the other hand, this paper focuses on convex models, both for the analysis (smooth and strongly convex) and in the experiments (l2-regularized logistic regression). 
The analysis techniques do not appear to make use of strong global structure (beyond the definition of strong convexity). Is there reason to believe it will not be possible to provide local convergence guarantees for L2GD under more relaxed assumptions (in particular, without assuming convexity)?\", \"a_few_other_minor_points\": [\"Regarding the second motivating point, see also the recent work of Woodworth et al. (arxiv:2006.04735 and arxiv:2002.07839).\", \"Is there any intuition why, in Fig 1, when $\\\\lambda$ approaches $10^{1}$, the blue curve begins to decrease and orange curve begins to increase? (I.e., why the non-monotonic behaviour?)\", \"Minor nit.: Alg 1 seems to violate the important notion in FL that devices never reveal their private information (e.g., their local decision variables or gradients) directly to the centralized master. Rather than saying that (8) is implemented at the master, would it make more sense for each device to receive $\\\\bar{x}^k$ from the Master and implement (8) locally? (The averaged model can be computed in a secure way using DP and secure aggregation, much the same as it is in current FL implementations.)\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
boZj4g3Jocj | Learning to communicate through imagination with model-based deep multi-agent reinforcement learning | [
"Arnu Pretorius",
"Scott Cameron",
"Andries Petrus Smit",
"Elan van Biljon",
"Lawrence Francis",
"Femi Azeez",
"Alexandre Laterre",
"Karim Beguir"
] | The human imagination is an integral component of our intelligence. Furthermore, the core utility of our imagination is deeply coupled with communication. Language, argued to have been developed through complex interaction within growing collective societies serves as an instruction to the imagination, giving us the ability to share abstract mental representations and perform joint spatiotemporal planning. In this paper, we explore communication through imagination with multi-agent reinforcement learning. Specifically, we develop a model-based approach where agents jointly plan through recurrent communication of their respective predictions of the future. Each agent has access to a learned world model capable of producing model rollouts of future states and predicted rewards, conditioned on the actions sampled from the agent's policy. These rollouts are then encoded into messages and used to learn a communication protocol during training via differentiable message passing. We highlight the benefits of our model-based approach, compared to a set of strong baselines, by developing a set of specialised experiments using novel as well as well-known multi-agent environments. | [
"imagination",
"deep",
"reinforcement",
"communication",
"agent",
"set",
"human imagination",
"integral component",
"intelligence",
"core utility"
] | Reject | https://openreview.net/pdf?id=boZj4g3Jocj | https://openreview.net/forum?id=boZj4g3Jocj | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"eL40rEFbm8w",
"cpG8cCYuoD6",
"TM1J4kPTxnU",
"bAzXVsu0aHt",
"hD-pR4tcjv1",
"MpVs55kCNJr"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040402620,
1606231000666,
1603919979803,
1603886836789,
1603744387479,
1603709112109
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3553/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3553/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3553/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3553/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3553/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The authors present a model-based method for cooperative multi-agent reinforcement learning and propose to use communication of future predictions (as given by a learned world model) as a way to overcome partial observability.\\n\\nOverall, all reviewers found this work to be of great interest and the combination of planning + communication novel. However, all reviewers pointed out that the claims the paper makes are not fully supported by its experimental framing, pointing to several shortcomings in experimental design in general and in the control of appropriate baselines. The authors have since clarified several aspects in their paper and also included a new RL environment. \\n\\nHowever, as the paper currently stands, it does not fully provide convincing evidence for its proposal, which is nevertheless very intriguing. I would, though, like to echo the reviewers' suggestions that the authors work a bit more on the experimental design, and I really hope this work will appear at a later venue.\"}",
"{\"title\": \"Response to comments\", \"comment\": \"We would like to start by thanking every reviewer for their valuable feedback. We have taken note of everything that was said and made the necessary improvements to our system. As can be seen in our updated paper, in Figures 4 and 5, the MACI algorithm is more stable than before, while still outperforming all other algorithms we tested against. This is due to our system now training the world model, policy and encoder in an iterative fashion. Previously, the world model was only trained once. We provide a more detailed algorithmic setup of our new MACI algorithm on page 6.\\n\\nSome of the reviewers rightfully pointed out that our digit game environment was quite limiting as agent actions did not influence future observations. We, therefore, created a new grid world environment that is much closer to the original RL setting. This environment allows us to test communication and navigation, where every action can influence future observations of all agents. With our new algorithm, we are able to scale beyond our previous limitation of 2 agents. We see that 4 agents can effectively communicate in this grid world. We also see that a combination of communication and a world model is needed for good performance. \\n\\nThe reviewers also pointed out that our paper lacks details on the network architectures used for this work. We, therefore, added an Appendix with these specifications. Unfortunately, due to our main author falling sick, we did not get to address all the envisioned improvements and write them up in time. We will, however, keep on improving the algorithm in the coming weeks and release our updated results. \\n\\nThank you for your time and consideration.\"}",
"{\"title\": \"Insufficient evidence for an otherwise interesting take on MARL algorithms\", \"review\": \"The paper talks about developing a model-based method for cooperative multi-agent reinforcement learning. The proposed approach utilizes communication as a tool for mitigating the partial observability induced by the non-stationary task while also helping agents reason about other agents' behaviors. The authors present their motivation for using language as a medium in model-based RL stemming from early literature in psychology and linguistics.\\n\\nThe setup consists of decentralized agents, each of which is equipped with a world model similar to Ha et al. 2018. Further, each agent also has a separate message input that is received from the other players. Each agent does a form of decision-time planning where it produces rollouts for K steps before taking a real action. The message is then the encoding produced by the concatenation of the observations, rewards, and the actions taken during the rollouts.\\n\\nThe approach is novel and one of the first works to apply model-based RL in a dec-POMDP. The paper does a good job of explaining prior work in related domains. The schematic diagram also depicts the setup in an efficient and standalone manner.\\n\\nStill, I have some qualms related to the experimental setup that arguably make the contribution of the proposed imagination framework inconclusive.\\n\\n- In the digits game, the agents need to produce actions that represent the next observation of the other agent. The transition dynamics are defined in a way such that the next observation for an agent i is independent of the action taken at the current timestep. I find this formulation to be incoherent with the way MACI works. Specifically, \\n a) The AgentController that produces the action doesn't need to depend on the current observation since it has no effect on the action. 
\\n b) The WorldModel produces the next observations, next hidden states, and the rewards given the current observation, current action, and current hidden state. Similar to the above, the information about the current action is not needed to produce the next observation. Moreover, the rewards, in this case, are only tied to the action. So it would make sense to produce them along with the action in the AgentController with a recurrent network.\\nOverall I believe this game is not aligned with the objectives of MACI, although I would love to have the authors clarify this.\\n\\n- There is no information about the objective functions used for optimization or any detail about the learning process, without which the work is hard to reproduce.\\n\\n- The choice of baselines doesn't seem to be appropriate for the task. Since all the baseline methods used do not use explicit communication in their original forms, the comparison thus becomes unfair. I would like the authors to clarify whether the baselines were modified in a way to accommodate this. This is important specifically in the two tasks chosen since I believe just adding communication should yield sufficient improvement.\\n\\n- The current approach is only applicable to a two-agent cooperative game, narrowing down the scalability of the method. I believe the approach has the potential to extend to multiple agents either by having a confluence of messages or explicit grouping of agents. \\n\\n- An important missing ablation experiment is comparing comm+world model with only world model. This is crucial since it will determine whether the performance gain is due to the abstract planning or the communication.\\n\\n- The overall compute required is more than running a real-time experiment since the planning uses K-step rollouts. 
Some ablation of the choice of K would be interesting to look at, especially in terms of wall time.\", \"typo\": \"Fig 6-A title\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The claim is not well-supported.\", \"review\": \"Summary:\\n\\n \\nThis paper proposes to combine model-based and multi-agent reinforcement learning. The authors follow the typical recurrent neural world models setting to generate imagined rollouts for decision-time planning. To tackle the non-stationarity of a multi-agent environment, they build end-to-end differentiable communication channels between agents within a pre-defined neighborhood. The communication message is defined as abstract information encoded from the imagined rollout. Agents then make decisions based on the message they received and the output of recurrent neural world models. Empirical studies are performed to show the superiority of the proposed methods over SOTA model-free MARL approaches. Results are shown in two simple environments, which are designed to require communication between agents to solve the task.\\n\\n\\n##########################################################################\", \"pros\": [\"The motivation of doing model-based MARL is very clear and challenging.\", \"Overall, the paper is well written.\", \"The ablation study on the roles of world models and communication channels is interesting.\", \"##########################################################################\"], \"cons\": \"- Although the paper presents itself as a combination of model-based and multi-agent RL, my major concern is that the proposed model still deals with these two problems separately. In particular, the world model doesn't consider the dynamics of other agents, thus being an independent model only. The paper proposes to tackle the multi-agent part of the problem by building an explicit communication channel, which lacks sufficient novelty.\\n\\n- I'm also concerned about the lack of rigorous experimentation to support the paper's claim. \\nThe two proposed environments are extremely tailored for algorithms with explicit communication channels and are limited in the number of agents. 
\\n\\t- For the digit game, the non-stationarity is not quite clear when there are only two agents. I'd like to see what would happen if the number of agents in the digit game increases.\\n\\t- For the invisible spread, the ablation study shows that the role of world models is not important. I'd like to see the performance of other baseline algorithms that use explicit communication channels, which are not compared and which seem to work well based on what the paper reports. If so, I don't see why this experiment supports the claim of combining model-based and multi-agent RL.\\n\\n##########################################################################\\n\\nPost rebuttal\\n\\nThe authors' response does not address my primary concern and I'd like to keep my original score.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A neat idea that requires further investigation\", \"review\": \"Thank you very much for sharing these cool ideas. I enjoyed the clear writing and excellent related work sections, and I genuinely believe this paper presents interesting concepts that warrant further investigation. Unfortunately, in its current state, this manuscript is not ready to be shared with the wider community at ICLR.\\n\\nI will leave here a few suggestions for improvement and ideas on how to strengthen your argument. I sincerely hope you will find these useful as you continue your research on this topic.\\n\\nThe manuscript describes Multi-Agent Communication through Imagination (MACI). MACI is an imagination-inspired communication protocol that allows two sub-modules to exchange information about their non-overlapping observations.\\n\\nThe manuscript is well written and easy to follow, and the authors properly place their contributions in the context of existing ideas.\\n\\nWhile the experiments presented are clear and the results are encouraging, I think the experimental section could benefit from additional experiments; here is why:\\n\\n- The tasks presented here are extremely simple. I understand the need for didactic environments, but in a purely methods paper, the reader is left to wonder if this method scales to more complex environments, if it can work with more than two agents, and if it can handle non-cooperative settings. This is especially acute here, given that Fig. 6 suggests MACI only helps in 1 out of 2 environments, as the performance gains in Invisible Spread are obviously attributable to partial observability in the baselines.\\n\\n- The tasks presented are purely cooperative, and the communication system is differentiable. 
This means that by setting WorldModel, Encoder and Aggregator to the identity function, one would recover exactly a single-agent architecture that has access to the combined observations and operates in the product of the action spaces. The only difference might lie in how the system is supervised (it is unclear from the manuscript how WorldModel is trained). This is similar to what is presented in Fig. 6 in the ablation study, but this would additionally include a shared world-model to produce an \\\"ideal\\\" agent. How does this perform? The baselines provided are at an obvious disadvantage as the environments are partially observable. This performance ceiling would guide the reader in understanding how much of the gap is recovered by MACI.\", \"additional_minor_remarks\": [\"I cannot find in the methods section how WorldModel is trained. Could this be made clearer in the text?\", \"How accurate is WorldModel? How important is this accuracy? What happens if we replace our learned WorldModel module with an ideal oracle?\", \"There is a bunch of work in modeling MARL (see, e.g., Hierarchical Policy Models [Zheng 2016], VAIN [Hoshen 2017], NRI [Kipf 2018] and RFM [Tacchetti 2018]). In particular, RFM introduces on-board imagination models that influence the decisions of each agent. It might be good to add these to your references.\", \"Thank you again for sharing these cool ideas, I hope to see more of this soon and that you'll find some of this feedback useful.\", \"All the best.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"The presented algorithm is interesting, but the paper needs reframing and improved experimental method\", \"review\": \"This paper claims to present an algorithm which enables a population of (two) agents\\nto learn to communicate and coordinate to solve a task, and thus positions itself\\nin the field of multi-agent Deep RL. After a long but rather vague and unspecific introduction\\nand related work (see below), it describes the algorithm, then presents experiments where\\nthe introduced algorithm is compared with model-free MARL baselines.\\n\\nWhile the algorithm presented is interesting and has potentially some novelties compared to\\nthe state-of-the-art (e.g. differentiability of message passing in model-based MARL), it has\", \"also_a_number_of_weaknesses\": \"1) Globally, I had a lot of difficulty understanding clearly what the aims of this paper are:\\nWhat are the problems it aims to solve? What are the scientific questions addressed?\\nNeither the abstract nor the text provide sharp explanations of these aims.\\n\\n2) The paper uses very loaded but undefined vocabulary like \\\"imagination\\\", \\\"language\\\" and \\\"communication\\\".\\nWhile in general I think it can sometimes be useful to use concepts and terms from human cognitive sciences\\nto describe AI systems, in this particular case I found it very far-fetched to speak of \\\"imagination\\\" and \\\"language\\\",\\neven \\\"communication\\\". It seems in practice the authors might simply mean something like \\\"prediction of future states\\\"\\nwhen they use the term \\\"imagination\\\". \\\"Language\\\" and \\\"communication\\\" are also far-fetched because in cognitive\\nscience and linguistics these terms refer to systems that enable different individuals, with different world views, to\\ncommunicate an intent to each other. 
Here, the \\\"agents\\\" share the same world model, so they are not really\\ndifferent individuals with their own world representations, and their communication is rather like \\nmessage passing in GNNs, which is pretty far from \\\"language\\\" or \\\"human-like communication\\\".\\n\\n3) It is not even clear whether it is meaningful to call the presented system \\\"multi-agent\\\", since in addition\\nto a centralized shared reward, there is also a shared world model. To me, the system looks rather like an RL\\nsystem that controls a multi-component body with local controllers that synchronize through message passing,\\nquite similarly to graph neural network controllers (also including message passing) used e.g. in Pathak et al. 2019.\\nA discussion of the similarities and differences with work such as Pathak et al. is needed.\\n\\n4) The authors are right to say that there is little research on model-based MARL, and cite one exception:\\nKrupnik et al. However, it is not justified why this closely related work is not included in the baselines,\\nor at least compared in discussion more thoroughly. Authors might also want to discuss another model-based MARL\", \"paper\": \"Zhang et al. 2020.\\n\\n5) A large part of the related work section is not relevant to this paper, in particular about Deep RL and model-based RL,\\nwhich are much broader topics than the one addressed in this paper.\\n\\n6) The description of the method lacks sufficient technical details for reproducibility; in particular it lacks detailed\\npseudo-code (some refs are said to be in an appendix, but I did not find an appendix), and no link to code is provided.\\nFurthermore, there is insufficient information on how hyperparameter selection for the baselines was made.\\n\\n7) The two environments in the experiments are not sufficiently well motivated: why did you need to introduce them rather\\nthan reuse existing test environments? E.g. 
which particular problems did you want to address that were not possible with\\nexisting environments?\\n\\n8) Since the claimed topic of the paper is about the emergence of a \\\"communication system\\\", one would expect a detailed\\nanalysis of the emergent communication code (currently only Figure 5 gives a quite superficial qualitative analysis).\\n\\n9) The quantitative comparison of algorithms is not made using a sufficiently strong statistical method (only 5 seeds,\\nno tests such as Welch t-tests).\\n\\nFor these reasons, while the particular algorithm studied is in itself interesting, I think the paper would need a major\\nconceptual reframing and a better experimental methodology and justification before publication.\", \"references\": \"Pathak et al. (2019) Learning to Control Self-Assembling Morphologies: A Study of Generalization via Modularity\", \"https\": \"//arxiv.org/pdf/1902.05546.pdf\\n\\nZhang, K., Kakade, S. M., Ba\\u015far, T., & Yang, L. F. (2020). Model-based multi-agent RL in zero-sum Markov games with near-optimal sample complexity. arXiv preprint arXiv:2007.07461.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
5rc0K0ezhqI | Unpacking Information Bottlenecks: Surrogate Objectives for Deep Learning | [
"Andreas Kirsch",
"Clare Lyle",
"Yarin Gal"
] | The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize, as well as a regularized objective with which to train models. However, multiple competing objectives are proposed in the literature, and the information-theoretic quantities used in these objectives are difficult to compute for large deep neural networks, which in turn limits their use as a training objective. In this work, we review these quantities, compare and unify previously proposed objectives, which allows us to develop surrogate objectives more friendly to optimization without relying on cumbersome tools such as density estimation. We find that these surrogate objectives allow us to apply the information bottleneck to modern neural network architectures. We demonstrate our insights on MNIST, CIFAR-10 and ImageNette with modern DNN architectures (ResNets). | [
"deep learning",
"information bottleneck",
"information theory"
] | Reject | https://openreview.net/pdf?id=5rc0K0ezhqI | https://openreview.net/forum?id=5rc0K0ezhqI | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"nE6rNUU6hK",
"xsyY8RPBSzm",
"TqB5y1Wo2E0",
"v5wzj92lbNH",
"aGbXQeul5bc",
"FWihY1eehoI",
"6SbuUinc_Rd",
"QvNCkdJBX1x",
"enxXSvwfX5u",
"2qbcspcSnbS",
"JgTHb0uTE5E",
"OHiT4xE5n29",
"bZw86dHiMs0",
"ynGuKwN76Ll"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1611561792432,
1610040481807,
1605811271362,
1605733175924,
1605378437312,
1605378237639,
1605378156276,
1605377940506,
1605377191577,
1605376984973,
1604530768224,
1604033351161,
1604023016102,
1603915409775
],
"note_signatures": [
[
"~Andreas_Kirsch1"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3552/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3552/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3552/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3552/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3552/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3552/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3552/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3552/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3552/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3552/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3552/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3552/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Quality of the decision notification\", \"comment\": \"Dear AC,\\n\\nthe quality of your decision notification is very disappointing and makes your decision intransparent.\\n\\nThe very well informed reviewer/\\\"more engaged reviewers\\\" (R3 I suppose) never engaged with our replies during the discussion period.\\n\\nThere is no insight into the discussion with R3 and reasons that made R4 lower their score to a \\\"weak accept\\\", and there has not been significant visible discussion on what \\\"more empirical results\\\" would need to be provided.\\n\\nYou have also not engaged with our comments in any way.\\n\\nI would assume that it is the role of the AC to summarize the discussion and make the decision appear reasonable. This has not happened.\\n\\nI hope that the quality of your engagement and decision notification for this paper is a one-off, and you have performed your function better overall. If not, I hope you will be able to improve on this in the future.\\n\\nIt is rather frustrating to spend a lot of time reviewing other papers and writing lengthy reviews to then see an AC behave so.\\n\\nBest wishes,\\\\\\n Andreas\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"We have a very well informed reviewer who strongly feels that this paper is insufficiently novel, and there was significant further discussion on how the paper might be raised to a publishable level with more empirical results. I will have to side with the more engaged reviewers who feel that the paper should be rejected.\"}",
"{\"title\": \"Why does every comment need a title? This is so unnatural\", \"comment\": \"Thank you to the authors for their response as well as the updated draft. I'm satisfied enough to raise my score to a 7.\"}",
"{\"title\": \"An information-theoretic approach to VAEs\", \"comment\": \"Further to the question about the applicability of IB surrogates to VAEs, we can deduce an objective similar to the IB objective for VAEs using the insights from the paper: to obtain an ELBO, we use $H[X] + H[Z|X] = H[X|Z] + H[Z]$ and rearrange:\\n$$ H[X] = H[X|Z] + H[Z] - H[Z|X] \\\\overset{\\\\text{(1)}}{\\\\le} H_\\\\theta[X|Z] + H[Z] - H[Z|X] \\\\overset{(2)}{\\\\le} H_\\\\theta[X|Z] + H[Z].$$\\n\\nWe can also put this equation into words: we want to find latent representations such that the reconstruction cross-entropy $H[X|Z]$ and the latent entropy $H[Z]$, which tell us about the length of encoding an input sample, become minimal and approach the true entropy as the average optimal encoding length of the dataset distribution.\\n\\nThe first inequality (1) stems from introducing a cross-entropy approximation $H_\\\\theta[X|Z]$ for the conditional entropy $H[X|Z]$. The second inequality (2) stems from the injection of zero-entropy noise with a stochastic encoder. For a deterministic encoder, we would have equality. We also note that (1) is the DVIB objective for a VAE with $\\\\beta=1$, and (2) is the DIB objective for a VAE.\\n\\nFinally, we can use one of the surrogates to upper bound $H[Z]$. For optimization purposes, we can substitute the simplified $L_2$ activation regularizer $\\\\mathbb{E} ||Z||^2$ and minimize \\n$$\\\\min_\\\\theta H_\\\\theta[X|Z] + \\\\mathbb{E} ||Z||^2.$$\\nIt turns out that this objective is examined amongst others in the recently published Ghosh et al. (2019) as a *CV-VAE*, which uses a deterministic encoder and noise injection with constant variance. The paper derives this objective by noticing that the explicit parameterizations that are commonly used for VAEs are cumbersome, and that the actual latent distribution often does not match the induced distribution (commonly a unit Gaussian), which causes sampling to generate out-of-distribution data. 
It fits a separate density estimator on $p(z)$ after training for sampling. The paper then goes on to examine other methods of regularization, but also provides experimental results on CV-VAE, which are in line with VAEs and WAEs. The derivation and motivation in the paper are different and make no use of information-theoretic principles. Our short principled derivation above shows the power of using the insights from our paper for applications outside of supervised learning, and we are happy that it has been independently validated already. \\n\\n---\\nGhosh, Partha, Mehdi SM Sajjadi, Antonio Vergari, Michael Black, and Bernhard Sch\\u00f6lkopf. \\\"From variational to deterministic autoencoders.\\\" arXiv preprint arXiv:1903.12436 (2019).\"}",
"{\"title\": \"General Response\", \"comment\": \"We would like to thank all reviewers for their comments, and specifically Reviewer 3 for pointing us to additional related work (\\u201cThe Conditional Entropy Bottleneck\\u201d and the paper \\u201cCEB Improves Model Robustness\\u201d). We were only aware of an earlier version of \\u201cThe Conditional Entropy Bottleneck\\u201d (which we had included in our literature review already), and will update our discussion of related work to include these additional papers.\\n\\nHaving now reviewed the 2020 version of the CEB paper in more detail as well as \\u201cCEB Improves Model Robustness\\u201d, we are confident that our results are distinct from prior work. The similarities between our work and CEB go to the same extent as a number of works that provide lower bounds on the IB objective. Our method\\u2019s main distinguishing feature is that unlike the general formulations of many other objectives, our approach is specifically designed with *computational efficiency and simplicity* in mind. \\n\\nSpecifically, our objectives differ along 3 principal axes from variational lower bound approaches such as VIB and VCEB:\\n\\n1. we provide theoretical motivation for the addition of zero-entropy noise to the latent representation (Proposition 3); \\n2. we analyze the difference between optimizing $H[Y|X]$ and $H[Y|Z]$, which leads to two different variants of multi-sample Dropout that can be used in conjunction with our optimization objectives; and\\n3. our objectives, which unify IB and DIB objectives, don\\u2019t require variational approximation of the marginal $p(z)$ or conditional $p(z|y)$, making them straightforward to implement on top of existing architectures. \\n\\nOf course, we have updated the paper to reference the two papers and updated the literature review and contributions accordingly. We expect to upload the revision by Monday. 
We are happy that \\u201cCEB Improves Model Robustness\\u201d showed results for CIFAR-10 and ImageNet, which makes us optimistic that our results using our simple surrogate objective will also translate from ImageNette to ImageNet, which we could not validate due to computational constraints.\\n\\n---\\nFischer, Ian; Alemi, Alexander A. 2020. \\\"CEB Improves Model Robustness.\\\" Entropy 22, no. 10: 1081. https://arxiv.org/abs/2002.05379\"}",
"{\"title\": \"Response to Reviewer 3 (Part 2)\", \"comment\": \"> Their initial insight (Proposition 1) that recognizes that the information bottleneck objective I(XZ) - beta * I(YZ) can be rewritten as H(Y|Z) + beta' * I(XZ|Y), is exactly the insight given in CEB. This paper bounds the I(XZ|Y) term by assuming Z has zero-mean Gaussian noise (which can be chosen such that it is also zero-entropy noise). In contrast, the CEB paper gives a variational bound on the rewritten objective, and when optimizing this bound you sample from the encoder and use the samples to parameterize a distribution (where a gaussian is the simplest choice of distribution). It seems like this paper is producing a special case of CEB for Gaussian assumptions on that distribution family.\\n\\nWe present **three** distinct objectives in our paper. Of these three objectives, only one is directly comparable to the CEB objective (the objective based on $\\\\log Var[Z|Y]$); the other two are more directly related to information quantities that do not appear in the CEB. Our \\u2018CEB-like\\u2019 objective does indeed overlap with the VCEB objective in the case of a deterministic encoder (in our objective) and the use of an isotropic Gaussian distribution in the VCEB objective, as used in the implementation of CEBR. However, even this subset of our objective is not strictly a special case of VCEB due to point 2 above. \\n\\nFurther, the simplified version of VCEB implemented in \\u201cCEB Improves Model Robustness\\u201d still requires an explicit reverse decoder $b(z|y)$, while UIB does not (point 3 above). Our approach, therefore, also simplifies the method proposed in CEBR, making it more accessible to the broader deep learning community while attaining comparable results to CEBR. 
Moreover, we found the other objectives ($\\\\log Var[Z]$ and $\\\\mathbb{E}||Z||^2$) to be stabler under optimization.\\n\\nWe agree that the insights we use to derive our method bear resemblance to the derivations of a number of IB objectives, including but not limited to VCEB. The paper\\u2019s novelty stems from how it translates these insights into a tractable, easy-to-implement objective, and in the theoretical results used to do so. In particular, our paper provides theoretical motivation for the design choices used to obtain the empirical results in CEBR, where isotropic Gaussian noise of fixed variance is added to the latent mean embeddings. This choice is analogous to our architecture (though we also incorporate stochasticity in the latent encoding), and can be motivated as a corollary of our Proposition 3: namely, the use of fixed variance removes the possibility of a class of pathological optimization trajectories.\\n\\n> In addition, the empirical contributions are decidedly not novel. They claim to present \\\"the first successful evaluation of IB objectives on CIFAR-10 and ImageNette\\\", but prior work contains these evaluations: Fischer 2020 (linked above) contains CIFAR-10 results and Fischer and Alemi 2020 (https://arxiv.org/pdf/2002.05380.pdf) contains robustness results on CIFAR-10 and ImageNet, on larger ResNets than the experiments in this paper.\\n\\nThe results in \\u201cCEB Improves Model Robustness\\u201d for CIFAR-10 and ImageNet are impressive, and the CIFAR-10 robustness results are in line with what we report in this paper. We cannot provide ImageNet results as our own computational resources are not sufficient. There are some significant differences between the implementations in these papers, however. Our experiments also include stochastic encoders using Dropout and are easy to train with Adam. 
They also don\\u2019t require additional changes to the training schedule.\\n\\n\\u201cCEB Improves Model Robustness\\u201d does not include stochastic encoders beyond injecting noise for CIFAR-10 and ImageNet and uses special training schedules that anneal the Lagrange multiplier to train the models. For the 2020 version of the CEB paper, we have not been able to determine exactly what kind of encoder the experiments use.\\n\\nWe hope we have been able to clarify our contributions and draw attention to the novelty of our work.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We would like to thank the reviewer for drawing our attention towards the new revision of \\u201cThe Conditional Entropy Bottleneck\\u201d (CEB2020) and the paper \\u201cCEB Improves Model Robustness\\u201d (CEBR), which we were not aware of. Moreover, we also want to thank the reviewer for their important comment and for appreciating the writing, exposition and visualizations in our paper.\\n\\nHowever, we strongly disagree with the claim that our paper is mostly a duplicate of prior work, which we detail below in response to the points made in the review.\\n\\nWe will upload a revised version of the paper by Monday to cite the work mentioned above and update the literature review and contributions accordingly. We apologize for claiming to be the first ones to run experiments on CIFAR-10 and higher-dimensional datasets as we were not aware of this recently published work and the new revision of CEB. \\n\\n### CEB2020 and CEBR\\n\\nWe thank the reviewer for bringing the papers CEB2020 and CEBR to our attention -- both of these papers along with our paper were submitted to arXiv within a few weeks of each other, and we failed to catch the update to CEB2020 in our updated literature review. We were previously unaware of CEBR, whose publication in Entropy occurred one week prior to the ICLR submission deadline and so evaded our literature review. We will be happy to cite this work in our revisions, and to update our reference to CEB2020 to address the changes from the 2019 version of the paper.\\n\\nWe have cited the ICLR 2019 submission of CEB and included it in our comparison. It is great work, and we like the insights with regard to the optimal choice of the Lagrange multiplier, which we connect to the Entropy Distance Metric introduced by MacKay (2003) in section C.4 in our appendix. 
Independently, we had learnt to appreciate I-diagrams as providing principled intuitions, and we were happy to find them in CEB, too. We decided to provide extensive details and explanations in the appendix of UIB to ensure that future readers can learn to appreciate them.\\n\\n### Specific Replies\\n\\nWe want to offer corrections to the following claims about our paper and its contributions:\\n\\n> The objectives assume we add a single sample of zero-entropy noise to each sample of the output z of the stochastic encoder p(z|x), and then give estimators on an upper bound of the information bottleneck objective.\\n\\nWe are not limited to single samples and analyze multi-sample Dropout approaches in Section 3.2 in the paper: we provide an experiment comparing Decoder Cross-Entropy and Prediction Cross-Entropy (which represent the two different Dropout multi-sample approaches) in the appendix in G.3.3 as well as relevant plots in Figure G.11. We will update Section 3.2 to refer to the appendix explicitly.\\n\\n> I am not convinced that their theoretical contribution is novel - it seems to be a variant (or a specific case) of prior work on Conditional Entropy Bottleneck (CEB) given in Fischer 2020 (https://arxiv.org/pdf/2002.05379.pdf). \\n\\nHaving now reviewed the 2020 version of the CEB paper in more detail as well as \\u201cCEB Improves Model Robustness\\u201d, **we are confident that our results are distinct from the conditional entropy bottleneck**. Our paper presents a set of lower bounds on the IB objective that are all obtained by a) decomposing the IB objective into its constituent information quantities and b) computing tractable estimators of lower bounds on these quantities. The contribution in a) is largely pedagogical and is the starting insight for a number of lower bounds on the IB objective, not limited to CEB. 
In b) the paper distinguishes itself from CEB in two key respects: first, we introduce a novel set of theoretical results which have intriguing implications independent of their use in formulating our objectives, and second, we present three distinct estimators of lower bounds, allowing us to compute not only a lower bound on the IB objective but also the DIB objective, which VCEB does not aim to approximate. In short, we present cheap and easy-to-implement estimators for a range of IB objectives (not limited to VCEB), thus making it easier for practitioners to incorporate these objectives into pre-existing architectures.\\n\\nWe cite three concrete sources of novelty for our work:\\n\\n1. we provide theoretical motivation for the addition of zero-entropy noise to the latent representation (Proposition 3);\\n2. we analyze the difference between optimizing $H[Y|X]$ and $H[Y|Z]$, which leads to two different variants of multi-sample Dropout that can be used in conjunction with our optimization objectives; and\\n3. our objectives, which unify IB and DIB objectives, don\\u2019t require variational approximation of the marginal $p(z)$ or conditional $p(z|y)$, making them straightforward to implement on top of existing architectures.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We want to thank the reviewer for finding our paper well-written and for appreciating the breadth of the presented results. We are grateful for their comments and for recognizing our contributions so clearly.\\n\\nEspecially for recognizing the importance of:\\n\\n* proposing three simple surrogate objectives that are more friendly to optimization;\\n* injecting noise to lower-bound entropies, which avoids pathologies; and\\n* analyzing the two cross-entropy losses and the connection to multi-sample Dropout.\\n\\nOur empirical results for CIFAR-10 and ImageNette show that we can obtain robustness and IB plane dynamics in line with the IB principle, even though our surrogate objectives are very simple. In particular, our simplest surrogate objective $\\\\mathbb{E} ||Z||^2$ (L2 activation regularization) together with noise injection can be trivially added to existing models. Moreover, it does not depend on $Y$ in any way. Thus, the benefits of injecting random noise in models and regularizing L2 activations will be of interest to practitioners and can be applied beyond supervised methods.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We would like to thank the reviewer for highlighting the contributions of the paper, in particular the wide applicability of our objectives to a richer class of latent distributions than had been covered by previous work, the scalability and computational efficiency of our approach, and our discussion of related work. We will endeavour to clarify the points listed below to improve the paper in our revision, which we expect to upload by Monday.\\n\\n### Specific Replies\\n\\n> I struggled to disentangle the novel contributions of the authors from the work they were reviewing. The novel contributions to my knowledge are the first 3 pros above. On the other hand, optimizing decoder uncertainty is not, for example (e.g. it is done by VIB). The authors need to do a better job at highlighting their own contributions but also making clear what is not.\\n\\nWe thank the reviewer for highlighting this source of confusion. With regard to the optimization of the Decoder Uncertainty, our principal contribution is to recognize the generality of the reparameterization trick and expand it to Dropout instead of using restrictive parameterized distributions as VIB and CEB do. We further provide additional analysis into the relationship between Prediction and Decoder Cross-Entropy, demonstrating that the two objectives only coincide when a single-sample estimator is used and drawing connections to orthogonal work which obtains similar findings (e.g. Rank-1 BNNs by Dusenberry et al. (2020)).\\n\\n> I don\\u2019t think the authors make a very compelling explicit case for what advantages their approach has over VIB (the main alternative). I do believe there are advantages (see pros 1-3 above), but they are scattered throughout the paper and not always made explicit. I think the authors need a dedicated subsection addressing this. 
This section should also highlight the disadvantages (looser bound maybe?).\\n\\nWe thank the reviewer for pointing this out and will add a section comparing the approach to both VIB and CEB in more detail.\\n\\n> Relatedly, why not include direct comparisons to VIB in the experiments? The authors seem to imply that VIB wouldn\\u2019t scale to the datasets they tackle, since the experiments in the VIB paper involve pretrained embeddings and smaller models. But a) the VIB paper was written 4 years ago and hardware/software has improved since then b) the model sizes are the same order of magnitude.\\n\\nThis is an excellent idea, and one that we attempted to implement when running experiments for this submission. Unfortunately, we were unable to replicate the results listed in the VIB paper using the publicly available code for the MNIST dataset (https://github.com/alexalemi/vib_demo), and so refrained from including these results in our submission. We will include this comparison in the appendix in our revision. \\n\\n---\\nDusenberry, Michael W., Ghassen Jerfel, Yeming Wen, Yi-an Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, and Dustin Tran. \\\"Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors.\\\" arXiv preprint arXiv:2005.07186 (2020).\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for highlighting our rigorous analysis and our tractable, easy-to-optimize objectives, which we hope will make IB objectives more accessible to the broader research community. We are also happy that the reviewer appreciates our usage of colors to make it easier to identify the different terms.\\n\\nWe would also like to thank the reviewer for suggesting additional recent applications of IB objectives. We will add these references to the introduction of the paper in our revision, which we expect to upload by Monday.\\n\\n### Specific Replies\\n\\n> I think the authors do a good job in theoretically motivating the particular surrogate objectives, but I would have liked to have seen some discussion as to why using a surrogate objective is sensible in the first place, versus say performing comparisons of deep IBs and VAEs. VAEs use maximum marginal likelihood as an objective. How does using the surrogate objective compare to maximum marginal likelihood? What are the implications of this for downstream application of the IB? Should IBs with surrogate objectives only be used for compression or also for prediction tasks like VAEs?\\n\\nThis is an intriguing point. Indeed, there is a strong connection between IB objectives and VAEs. For example, DVIB and $\\\\beta$-VAEs are related: the $\\\\beta$-VAE objective is essentially DVIB for a generative model (Appendix B of Alemi et al., 2017). More broadly, IB principles have been successfully applied within unsupervised learning as we relate in our introduction (Oord et al., 2018; Belghazi et al., 2018; Zhang et al., 2018; Burgess et al., 2018). We want to consider the effect of our surrogate objective on learnt representations in unsupervised settings in future work.\\n\\n> I would have also liked to have seen empirically how the surrogate objectives can generalise across domains that are not images? 
Are these objectives robust in other applications such as EHR data or say the chemical/molecular domain?\\n\\nEvaluating the effect of IB objectives on a broader class of neural network architectures is an avenue we are eager to explore in future work. Our objectives can in principle be easily slotted into any DNN architecture, including architectures optimized for non-visual domains. Further, our objectives can be applied in Bayesian neural networks which use dropout for posterior approximation. We plan to investigate these extensions in future work.\\n\\n> I also think the generalisation plots are not the most intuitive to understand at first glance and require more parsing and explanation in the text.\\n\\nWe thank the reviewer for highlighting this and will include a clearer explanation in our revision. \\n\\n---\\nAaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.\\n\\nMohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In International Conference on Machine Learning, pages 531\\u2013540, 2018.\\n\\nYing Zhang, Tao Xiang, Timothy M Hospedales, and Huchuan Lu. Deep mutual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4320\\u20134328, 2018.\\n\\nChristopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in \\u03b2-VAE. arXiv preprint arXiv:1804.03599, 2018.\"}",
"{\"title\": \"A very good paper on Information Bottleneck\", \"review\": \"This paper provides several surrogates for the Information Bottleneck (IB) and Deterministic Information Bottleneck (DIB) loss functions that are more friendly to optimization. For the decoder uncertainty part, the authors show that using Dropout and cross-entropy loss provides an unbiased estimator for the decoder cross-entropy, which upper-bounds the decoder uncertainty. For the regularization terms in IB/DIB, the authors inject noise into the latent features to lower-bound the conditional entropy of latent representations, and further propose three types of surrogate objectives for the regularization terms. Empirical results on CIFAR/ImageNette (a subset of ImageNet with 10 classes) show that the proposed surrogates yield similar behaviour in terms of adversarial robustness and information-plane dynamics, and demonstrate the scalability of the proposed method.\", \"strengths_of_the_paper\": [\"As this paper claims, this is the first work that proposes surrogates of IB loss functions that can be easily optimized and thus be scaled to large models and datasets (CIFAR/ImageNette). Results on both datasets show similar behavior (adversarial robustness, two-phase information plane) to IB loss based optimization.\", \"The injection of random noise into the latent representation is interesting and able to enforce a lower bound on the conditional entropy of latent representations, which further induces some surrogates that are optimization-friendly.\", \"This paper is well-written, fully prepared and contains a large number of results that are of wide interest to researchers working on this topic.\", \"I don't have specific criticisms for this paper.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"bounds on information bottleneck objectives\", \"review\": \"Summary:\\nThis paper makes a theoretical contribution of three \\\"surrogate objectives\\\" for the information bottleneck principle, followed by empirical results on MNIST, CIFAR-10 and ImageNette (a subset of 10 easily classified classes from ImageNet). The objectives assume we add a single sample of zero-entropy noise to each sample of the output z of the stochastic encoder p(z|x), and then give estimators on an upper bound of the information bottleneck objective.\", \"evaluation\": \"Overall, this is a fine paper - the introduction is especially well-written and I appreciated the inclusion of all the information plane images of training trajectories. However, the contributions are not sufficiently novel for acceptance at ICLR; this paper is mostly a duplicate of prior work in this area.\\n\\nI am not convinced that their theoretical contribution is novel - it seems to be a variant (or a specific case) of prior work on Conditional Entropy Bottleneck (CEB) given in Fischer 2020 (https://arxiv.org/pdf/2002.05379.pdf). Their initial insight (Proposition 1) that recognizes that the information bottleneck objective I(XZ) - beta * I(YZ) can be rewritten as H(Y|Z) + beta' * I(XZ|Y) is exactly the insight given in CEB. This paper bounds the I(XZ|Y) term by assuming Z has zero-mean Gaussian noise (which can be chosen such that it is also zero-entropy noise). In contrast, the CEB paper gives a variational bound on the rewritten objective, and when optimizing this bound you sample from the encoder and use the samples to parameterize a distribution (where a Gaussian is the simplest choice of distribution). It seems like this paper is producing a special case of CEB for Gaussian assumptions on that distribution family.\\n\\nIn addition, the empirical contributions are decidedly not novel. 
They claim to present \\\"the first successful evaluation of IB objectives on CIFAR-10 and ImageNette\\\", but prior work contains these evaluations: Fischer 2020 (linked above) contains CIFAR-10 results and Fischer and Alemi 2020 (https://arxiv.org/pdf/2002.05380.pdf) contains robustness results on CIFAR-10 and ImageNet, on larger ResNets than the experiments in this paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper but could use some clarifications\", \"review\": \"Overview: The authors provide a detailed analysis of the information bottleneck principle to explain how neural networks train and generalise. Specifically, since multiple competing IB objectives exist, the authors develop universal surrogate objectives that are easier to optimise and apply these to several neural network architectures for imaging tasks.\", \"quality_and_clarity\": \"The paper is clearly well written. I particularly like the use of colour to match corresponding terms in each of the objectives, which makes it easy to pin-point which pieces correspond.\", \"significance\": \"The IB principle is a very useful and relevant concept for specifically optimising models to retain only the relevant information wrt a particular context or prediction task. It has been applied in several contexts, e.g. deep generative models for identifying novel molecule structures (Wieser et al., 2020) or deducing sufficient adjustment sets for causal inference (Parbhoo et al., 2020) as well as DNNs in general (Tishby and Zaslavsky, 2015). Since it relies entirely on information theoretic quantities, it is widely applicable across several domains. Analysing these information theoretic objectives in order to make sense of these models is very important.\", \"pros\": \"1) The work presents a very rigorous analysis and discussion of multiple competing IB objectives and discusses the implications of each of these.\\n\\n2) The authors present tractable surrogate objectives that can make optimisation easier. 
Since these are defined entirely in terms of entropies as well, the work is applicable across various kinds of domains -- a key advantage of the classic IB too.\", \"cons\": \"1) I think the authors do a good job in theoretically motivating the particular surrogate objectives, but I would have liked to have seen some discussion as to why using a surrogate objective is sensible in the first place, versus say performing comparisons of deep IBs and VAEs. VAEs use maximum marginal likelihood as an objective. How does using the surrogate objective compare to maximum marginal likelihood? What are the implications of this for downstream application of the IB? Should IBs with surrogate objectives only be used for compression or also for prediction tasks like VAEs?\\n\\n2) I would have also liked to have seen empirically how the surrogate objectives can generalise across domains that are not images? Are these objectives robust in other applications such as EHR data or say the chemical/molecular domain?\\n\\n3) I also think the generalisation plots are not the most intuitive to understand at first glance and require more parsing and explanation in the text.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"Need to better highlight novel contributions and compare/contrast with VIB\", \"review\": \"Summary of paper:\\nThe authors review the information bottleneck (IB) in the context of deep learning. They discuss the obstacles to applying the IB (and a deterministic variant, the DIB) to modern datasets, review approaches to doing so, and introduce their own scalable approach. Their approach introduces practical surrogate objectives for the information regularizer term, and uses dropout as the source of stochasticity. They take advantage of the scalability of their method to train a ResNet with (D)IB on MNIST, CIFAR-10, and ImageNette and study adversarial robustness and evolution in the information plane.\", \"pros\": \"1) The surrogate objectives the authors introduce allow the application of IB without restricting the latent distribution to take on a form with analytic entropy (i.e. gaussian), as is the case the deep variational IB (VIB).\\n2) The authors are the first, to my knowledge, to scale the DIB from tabular settings (where it was developed) to modern function approximation settings using a clever zero-entropy noise trick (although this did come at the cost of diverging from the deterministic solutions that would be optimal).\\n3) The authors are the first, to my knowledge, to use dropout as the source of stochasticity that IB requires. This has the advantage of allowing the authors to use nearly arbitrary DNN architectures with (D)IB, as opposed to inserting an explicitly stochastic (gaussian) layer.\\n4) The paper functions as a good review, independent of the authors\\u2019 contributions. A common complaint when reading ML papers is that they don\\u2019t discuss related work enough, so this paper was refreshing.\", \"cons\": \"1) I struggled to disentangle the novel contributions of the authors from the work they were reviewing. The novel contributions to my knowledge are the first 3 pros above. 
On the other hand, optimizing decoder uncertainty is not, for example (e.g. it is done by VIB). The authors need to do a better job at highlighting their own contributions but also making clear what is not.\\n2) I don\\u2019t think the authors make a very compelling explicit case for what advantages their approach has over VIB (the main alternative). I do believe there are advantages (see pros 1-3 above), but they are scattered throughout the paper and not always made explicit. I think the authors need a dedicated subsection addressing this. This section should also highlight the disadvantages (looser bound maybe?).\\n3) Relatedly, why not include direct comparisons to VIB in the experiments? The authors seem to imply that VIB wouldn\\u2019t scale to the datasets they tackle, since the experiments in the VIB paper involve pretrained embeddings and smaller models. But a) the VIB paper was written 4 years ago and hardware/software has improved since then b) the model sizes are the same order of magnitude.\", \"other_comments\": \"1) I thought the heavy use of color distracted more than it helped, though I appreciate the effort.\\n2) This paper (https://arxiv.org/abs/1712.09657) also attempted to scale DIB to non-tabular problems (although far from the scalability of DNNs). The authors also added noise, but in this case to the data rather than the latents. Different problem being solved, but possibly interesting connection for the authors.\\n\\nUPDATE\\n\\nFollowing the author's response and updated draft, I've raised my score from a 6 to a 7.\\n\\nUPDATE 2\\n\\nFollowing discussion among the reviewers and especially a summary of experimental results by Reviewer 3, I'm lowering score back to a 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
25OSRH9H0Gi | Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms | [
"Quentin Bouniot",
"Ievgen Redko",
"Romaric Audigier",
"Angélique Loesch",
"Amaury Habrard"
] | Most existing deep learning models rely on excessive amounts of labeled training data in order to achieve state-of-the-art results, even though these data can be hard or costly to get in practice. One attractive alternative is to learn with little supervision, commonly referred to as few-shot learning (FSL), and, in particular, meta-learning that learns to learn with few data from related tasks. Despite the practical success of meta-learning, many of its algorithmic solutions proposed in the literature are based on sound intuitions, but lack a solid theoretical analysis of the expected performance on the test task. In this paper, we review the recent advances in meta-learning theory and show how they can be used in practice both to better understand the behavior of popular meta-learning algorithms and to improve their generalization capacity. The latter is achieved by integrating the theoretical assumptions ensuring efficient meta-learning in the form of regularization terms into several popular meta-learning algorithms for which we provide a large study of their behavior on classic few-shot classification benchmarks. To the best of our knowledge, this is the first contribution that puts the most recent learning bounds of meta-learning theory into practice for the popular task of few-shot classification. | [
"meta-learning",
"few-shot learning"
] | Reject | https://openreview.net/pdf?id=25OSRH9H0Gi | https://openreview.net/forum?id=25OSRH9H0Gi | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"YallLuFr3Ua",
"Ekyj5MyjaFL",
"HgK2PPdTfq",
"oMrAu15po5",
"CCIRrRq-B4",
"v8WK3lobTSL",
"J28pXiIoFu6",
"Mk2y1t9J9ld",
"w8T96h1n7jE",
"rFEvcC0xEkH",
"xJ6wWTQcOB",
"DQYVztNHIyL"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040409367,
1606216683180,
1606215688377,
1605385905220,
1605385610674,
1605385391014,
1605384994281,
1605384214523,
1603872275090,
1603857042067,
1603678512741,
1603678110109
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3550/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3550/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3550/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3550/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3550/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3550/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3550/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3550/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3550/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3550/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3550/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper is a systematic study of how assumptions that are present in recent theoretical meta-learning bounds are satisfied in practical methods, and whether promoting these assumptions (by adding appropriate regularization terms) can improve performance of existing methods. The authors review common themes in theoretical frameworks for a meta-learning setting that involves a feature learning step, based on which linear predictors for a variety of tasks are trained. Statistical guarantees for such a framework (that is, statistical guarantees for the performance of a predictor trained on an additional target task) are based on the assumption that the set of weight vectors of the linear predictors span the space (i.e. exhibit variety) and that the training tasks all enjoy a similar margin separability (that is, that the representation is not significantly better suited for some of the tasks than others).\\n\\nThe current submission cleanly reviews the existing literature, distills out these two properties and then proposes a regularization framework (that could be added to various meta-learning algorithms) to promote these properties in the learned feature representation. \\n\\nFinally, the authors experimentally evaluate to what degree the properties are already observed by some meta-learning methods, and whether the proposed additions will improve performance. It is established that adding the regularization terms improves performance on most tasks. The authors thus argue that incorporating insights obtained from recent theoretical frameworks of analysis can lead to improved performance in practice. 
Naturally, the purpose of the presented results is not to establish a new state of the art on a set of benchmark tasks, but to systematically study and compare the effect of adding regularization terms that will promote the properties that are desirable for a feature representation based on statistical bounds.\\n\\nI would argue that the research community should support this type of study. The work is well presented and conducted. Most importantly, the study has a clear and general message that will be valuable for researchers and practitioners working on meta-learning. \\n\\nHowever, the reviewers did not recommend publishing this type of study for ICLR. The authors are encouraged to resubmit their work to a different venue.\"}",
"{\"title\": \"Latest revision\", \"comment\": [\"As a follow-up on our previous comments:\", \"We adjusted the objective function to introduce hyperparameters to weight the regularization terms and we added additional experiments highlighting the results obtained when tuning them in the Supplementary Materials (Table 9 and Table 10).\", \"We thank the reviewer for their last question as it allowed us to come up with an example that answers it and better justifies our contribution. In short, the solution to the original problem with $W^*$ and $\\phi^*$ may be forced to lie outside the unregularized argmin set through our regularization. We provide an example for it in the beginning of Section 3.2 and illustrate it in Figure 1.\"]}",
"{\"title\": \"Latest Revision: Synthetic example and more recent baseline\", \"comment\": \"1. \\\"The proposed regularization is not novel and is similar to weight decay and spectral normalization.\\\" It is important to think about the regularization terms as a whole, and to not take the terms separately because satisfying both assumptions is crucial and only one of them is not enough to ensure efficient few-shot learning. In Table 5 of the Supplementary Materials, we showed that applying only the $L_2$ penalty on the linear predictors, as would be done with weight decay, is not effective on its own.\\n2. \\\"The improvement is not significant and more competitors should be considered.\\\" We added a more recent baseline, Meta-Curvature [1], in Table 11 and Figure 4 in the Supplementary Materials.\\n3. \\\"The assumptions are based on the optimal predictors and thus cannot be ensured.\\\" We provide an example in Section 3.2, with the associated code in the Supplementary Material, for which the optimal predictors in the optimal representation space do not satisfy Assumption 1, while learning with the constraint on the ratio of singular values leads to a different data representation and a set of linear predictors that satisfy it. This allows us to justify our regularization more rigorously and to show that in practice it may lead to significantly different empirical solutions. \\n\\n[1]: E. Park, J. Oliva. Meta-Curvature, NeurIPS 2019\"}",
"{\"title\": \"Clarification of several common remarks\", \"comment\": \"We thank the reviewers for their comments. Before answering each reviewer individually, we would like to clarify several common remarks made by the reviewers.\\n\\n1. \\\"The proposed regularization is not novel and is similar to weight decay and spectral normalization\\\". We would like to insist on the fact that these two regularizations are *fundamentally* different from ours. For the former, we note that weight decay regularizes the whole weight matrix learned by the neural network to improve generalization and avoid overfitting through sparsity, while our goal is to keep the classification margin unchanged during the training to avoid over-/under-specialization to some source tasks. Similarly, spectral normalization proposed by Miyato et al. ICLR 2018 to satisfy the Lipschitz constraint in GANs through dividing $W^*$ values by $\\sigma_\\text{max}(W^*)$ serves a completely different purpose and does not impact the considered ratio as explained in the revised version of our manuscript (see Section 3.3). \\n\\n2. \\\"The improvement is not significant and more competitors should be considered\\\". We investigate whether few-shot learning theory is supported by empirical observations. We do not seek to improve the classification accuracy through regularization (we do not even tune hyper-parameters!): this is merely a by-product of showing that few-shot learning theory indeed seems to work in practice! The differences in terms of performance are *statistically significant* in all cases when there is a perceivable difference (not necessarily improvement) in terms of the obtained results. As for the number of baselines, we combined several different approaches to few-shot classification studied separately in Cao et al. ICLR'20 (ProtoNet only, same benchmarks), Raghu et al. 
ICLR'20 (MAML only, miniImageNet + Omniglot) thus providing a more extensive evaluation compared to previous works studying the inner workings of meta-learning published at last year's ICLR. \\n\\n3. \\\"The assumptions are based on the optimal predictors and thus cannot be ensured.\\\" As with many other theoretical results in the statistical learning literature, the assumption given in Eq. 3 is stated for the true optimal matrix of the linear predictors $W^*$ which is unknown in practice. However, one can assume that the meta-learning process leads to a consistent estimation of $W^*$ and expect the output matrix $\\hat{W}$ to be close to the latter and thus to satisfy the same assumptions too. We added this explanation in Section 3.2.\"}",
"{\"title\": \"Our goal is not to propose a new regularization that outperforms the state of the art\", \"comment\": \"We thank the reviewer for the detailed and helpful review. We want to make it clear that our goal is not to propose a novel meta-learning algorithm with a new regularization that outperforms the state of the art meta-learning methods but rather to find out whether recent theoretical insights from few-shot learning theory are useful in practice. We will adjust the narrative of our paper accordingly to reflect this.\\n\\n- As explained in Section 3.3 of our paper, normalizing the norm of the linear predictors is different from weight decay as we only regularize/normalize the norm of the linear predictors and not the weights of the whole model. Also, the overall purpose of this in our case is completely different: weight decay is used to improve generalization through sparsity in order to avoid overfitting, while our goal is to keep the classification margin unchanged through the learning process to avoid over/under specialization to some source tasks seen during training. Finally, as the few-shot learning theory suggests, satisfying *both* assumptions is crucial and only one of them is not enough to ensure efficient few-shot learning. This agrees with the experimental results provided in Table 5 of the Supplementary materials highlighting this latter finding. We added this explanation in Section 3.3.\\n\\n- While the works on few-shot learning theory consider linear predictors, we agree with the reviewer that in practice the used predictors can be much more complicated and/or different from a linear layer. However, we do not fully understand why the proposed regularization might become trivial with a more complicated model and would appreciate more comments on this. 
Verifying the assumptions for more complicated models might be more difficult because it would require upstream work to understand which part of the model acts as predictors (we have already done it for ProtoNet that does not use a linear layer for classification) and how to compute and track the desired quantities. We added this explanation in Remark 2.\\n\\n- We agree that it may be interesting to add more recent and complicated few-shot learning methods to our comparison and we are working on it for the revised version of the manuscript and we will provide additional comparisons as soon as possible. We note, however, that considering established efficient methods appears to be more appropriate as most of the more complicated methods follow similar methodology (e.g. Meta-Curvature [1], MetaOptNet [2]). We added this explanation in Remark 2.\\n\\n- Computing the SVD is entirely differentiable and naturally supported in auto-differentiation frameworks such as Pytorch and Tensorflow and backpropagation through SVD was already used in [3]. \\n\\n[1]: E. Park, J. Oliva. Meta-Curvature, NeurIPS 2019\\n[2]: K. Lee, S. Maji, A. Ravichandran, S. Soatto. Meta-Learning with Differentiable Convex Optimization, CVPR 2019\\n[3]: X. Chen, S. Wang, B. Fu, M. Long, J. Wang. Catastrophic Forgetting Meets Negative Transfer:Batch Spectral Shrinkage for Safe Transfer Learning, NeurIPS 2019\"}",
"{\"title\": \"Our goal is not to propose a new regularization that outperforms the state of the art\", \"comment\": \"We thank the reviewer for the review.\\n\\n- Learning with deep neural networks that optimize a non-convex objective function often leads to differences between the reported and the reproduced results even when using the code provided by the authors. Thus, it is a common practice to compare the obtained results with the reproduced results rather than the reported ones [1,2,3]. Beyond that, we are interested in the relative difference between our reproduced results and those obtained with regularization that follows the current theory of few-shot learning. These differences are observed consistently in the experiments repeated 4 times with 4 different seeds and they are *statistically significant* when marked with \\\"\\\\*\\\" in Table 1, which means that the results are outside of the standard deviation observed. \\n\\n- Indeed, for better results, it is natural to introduce hyperparameters to weight the regularization terms. However, our goal is not to propose a novel meta-learning algorithm with a new regularization that outperforms the state of the art meta-learning methods but rather to find out whether recent theoretical insights from few-shot learning theory are useful in practice. We will make sure to adjust the narrative accordingly and we will add additional experiments highlighting the results obtained with hyperparameter tuning. \\n\\n- As with many other theoretical results in the statistical learning literature, the assumption given in Eq. 3 is stated for the true optimal matrix of the linear predictors $W^*$ which is unknown in practice. However, one can assume that the meta-learning process leads to a consistent estimation of $W^*$ and expect the output matrix $\\widehat{W}$ to be close to the latter and thus to satisfy the same assumptions too. 
We also agree with the reviewer regarding the employed terminology as our primary goal was indeed to verify whether the theoretical assumptions hold and to find practical ways to \\\"ensure\\\" them when it is not the case. We added this explanation in Section 3.2.\\n\\n- The question asked by the reviewer is very interesting as indeed, if the optimal predictors are not diverse enough, then we should not expect that the source data will be helpful in reducing the excess risk on the previously unseen target task. In practice, however, we deal with empirical estimators that, contrary to the theoretical setup, may be forced to lie outside the true unregularized argmin set through our regularization. We hypothesize that it may be possible to sacrifice some accuracy by learning less efficiently on the source tasks to have a better performance on the target task. We also agree that it would be interesting to find an illustrative synthetic experiment for this and we are currently working on providing it in addition to new illustrative figures that we have already included in the revised version of the manuscript in Figure 1. \\n\\n[1]: HOW TO TRAIN YOUR MAML, ICLR 2019\\n[2]: A CLOSER LOOK AT FEW-SHOT CLASSIFICATION, ICLR 2019\\n[3]: RAPID LEARNING OR FEATURE REUSE? TOWARDS UNDERSTANDING THE EFFECTIVENESS OF MAML, ICLR 2020\"}",
"{\"title\": \"Our goal is to study the theoretical vs real behavior of meta-learning algorithms\", \"comment\": \"We thank the reviewer for the review. We want to make it clear that our goal is not to propose a novel meta-learning algorithm with a new regularization that outperforms the state of the art meta-learning methods but rather to find out whether recent theoretical insights from few-shot learning theory are useful in practice and coherent with the real-world behavior of several popular meta-learning algorithms. We will adjust the narrative of our paper accordingly.\\n\\n1. In the context of our work, one should understand few-shot learning as a theoretical setup considered in Du et al.'19 where we are given a set of source tasks, and we want to make the most of them to learn efficiently (in the sample complexity sense) a new target task with few labeled data. Note that the exact way of how this is done algorithmically (with or without the support set, with or without learning episodes) does not change the statistical learning challenge of it which is to learn a model that can generalize with little supervision. Traditional statistical learning theory tells us that the generalization in this case will be provably poor (not enough target data and impossible to rely on data coming from different probability distributions), while the theoretical works we built upon tell us that source data may contribute equally in improving the generalization of the learned model alongside the target data if the assumptions that we study are respected.\\nApart from that, we agree with the reviewer on another important point: few-shot learning (FSL) is not strictly equivalent to meta-learning, even though the latter is almost always evaluated on the former task. We added this explanation in Remark 1.\\n\\n2. Indeed, the assumption regarding the task distribution is crucial in the previous works on meta-learning and few-shot classification. 
One should think of the i.i.d assumption used in Maurer et al.'16 in the same sense as if it were related to the random vectors and not probability distributions: if it holds, then the distributions of all source and target tasks are independent and follow the same random distribution. This assumption is not realistic in practice as the source tasks in few-shot classification are often dependent as they usually belong to different draws (without replacement) from the same dataset. We added this explanation in Section 3.1.\\n\\n3. We agree with the reviewer on the fact that the theoretical setups of Maurer et al.'16 and Du et al.'20 do not exactly correspond to the MAML algorithm. As with any theory, its application in practice requires certain relaxations of the considered setup which correspond in our case to assuming that the algorithmic details of how the learning in a few-shot regime is achieved should not impact the general conditions that should be respected in order for it to succeed. We added this explanation in Remark 1.\\n\\n4. We ask the reviewer to kindly specify to which \\\"those\\\" values he/she is referring to in the last item in the review and we will include them (or point out to where one can find them in the supplementary material) consequently.\"}",
"{\"title\": \"Our regularization is fundamentally different, and the goal is to study the theoretical vs real behavior of meta-learning algorithms\", \"comment\": \"We thank the reviewer for the feedback. Before addressing the different concerns raised by the reviewer, we first want to insist that our goal is not to propose a novel regularization that outperforms the state of the art few-shot classification methods but rather to study whether current theoretical results leading to provably efficient few-shot classification agree with the real-world behaviour of several popular meta-learning algorithms. Note that we do not seek to improve the performance or to show that our regularization works better than other methods: it is used solely as a way of verifying whether theoretical assumptions are useful, to some extent, in practice. We will make sure to adjust the wording accordingly.\\n\\n1. a. As many other theoretical results in the statistical learning literature, the assumption given in Eq. 3 is stated for the true optimal matrix of the linear predictors $W^*$ which is unknown in practice. However, one can assume that the meta-learning process leads to a consistent estimation of $W^*$ and expect the output matrix $\\\\widehat{W}$ to be close to the latter and thus, to satisfy the same assumptions too. We added this explanation in Section 3.2.\\n\\n b. Our regularization is strictly different from [1] from the algebraic point of view as dividing the $\\\\widehat{W}$ values by $\\\\sigma_{max}$ as done in [1] does not affect the ratio between $\\\\sigma_{max}$ and $\\\\sigma_{min}$. This trivially follows from the fact that if $\\\\sigma_{min} \\\\neq \\\\sigma_{max}$ then $\\\\widehat{W} / \\\\sigma_{max} = Udiag(\\\\{1, \\\\dots, \\\\sigma_{min}/\\\\sigma_{max}\\\\})V$ and $1 \\\\neq \\\\sigma_{min}/\\\\sigma_{max}$ ! 
Also, the regularization in [1] is used in GANs to satisfy the Lipschitz constraint which has nothing to do with our goal of increasing the diversity of linear predictors. We added this explanation in Section 3.3.\\n\\n c. As explained in Section 3.3 of our paper, normalizing the norm of the linear predictors is different from weight decay as we only regularize/normalize the norm of the linear predictors and not the weights of the whole model. Also, the overall purpose of this in our case is completely different: weight decay is used to improve generalization through sparsity in order to avoid overfitting, while our goal is to keep the classification margin unchanged through the learning process to avoid over/under specialization to some source tasks seen during the training. We added this explanation in Section 3.3.\\n\\n2. MAML++ [2] improves over vanilla MAML using *implementation* tricks such as learning rate annealing and per-step batch normalization. This has no strict theoretical justification and bears no connection to our proposal. We choose to first verify the theoretical insights on the most established methods from the field before embarking on adding the most recent contributions. \\n\\n[1] Spectral Normalization for Generative Adversarial Networks, ICLR 2018 \\n[2] HOW TO TRAIN YOUR MAML, ICLR 2019\"}",
"{\"title\": \"The idea of bridging theory and practice is good, but the proposed regularization is not novel.\", \"review\": \"##########################################################################\", \"summary\": \"The paper reviews common assumptions made by recent theoretical analysis of meta-learning and applies them to meta-learning methods as regularization. Results show that these regularization terms improve over vanilla meta-learning.\\n\\n##########################################################################\", \"reasons_for_score\": \"Overall, I vote for reject. The main idea of applying theory to practice is reasonable, but the regularization methods proposed are mainly known. Regularizing the singular values is similar to the spectral normalization proposed in [1]. The Frobenius norm regularization is similar to the commonly used weight decay.\\n\\n##########################################################################\\n1.\\tAssumption 1 in Du et al. states that the ground truth weight should cover all directions evenly. It cannot be ensured when the tasks are fixed. The proposed regularization penalizes the condition number of the weight matrix during training, which is more similar to the spectral normalization proposed in [1]. As to regularizing the Frobenius norm, there exists a line of literature showing weight decay works for general settings apart from meta-learning. Thus, I think the regularization proposed in this paper is known.\\n2.\\tThe experimental results indeed improve over vanilla meta-learning. However, as shown in [2], even with some simple tricks, meta-learning can be more stable and achieves better results. 
This casts doubt on the value of the proposed method.\\n\\n[1] Spectral Normalization for Generative Adversarial Networks, ICLR 2018\\n[2] HOW TO TRAIN YOUR MAML, ICLR 2019\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A theory inspired method for meta-learning\", \"review\": \"The main motivation of this paper is based on the theoretical results of meta-learning. To ensure the assumptions of the theories, the authors propose a novel regularizer, which improves the generalization ability of the model. Some results on few-shot learning benchmarks show the proposed method improves w.r.t. those baselines.\", \"here_are_the_main_concerns_of_this_paper\": \"1. The proposed method in this paper is based on the meta-learning theory as stated in Section 2. However, the theoretical setting here is not fully consistent with the few-shot learning setting. For example, there is no validation set in Eq. 1. The authors should add more discussion here to show whether these differences influence the final results.\\n2. One main theoretical assumption in meta-learning theory is the task distribution. Could the authors make this notion clear? Should we do empirical results on those tasks with different kinds of task distributions?\\n3. The meta-learning loss in Eq. 4 is a bit different from the popular meta-learning objective. For example, in MAML, we do not optimize the classifier W till convergence; only a limited number of gradient steps are used. \\n4. The authors should list those baseline values in Table 1, which are still important for reference.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Improving practical performance of meta-learning, with inspiration from theoretical results\", \"review\": [\"To improve the practical performance of meta-learning algorithms, this paper proposes two regularization terms that are motivated by two common assumptions in some recent theoretical work on meta-learning, namely (1) the optimal (linear) predictors cover the embedding space evenly, and (2) the norms of the optimal predictors remain bounded as the number of tasks grows. Numerical experiments show that the proposed regularization terms help achieve better performance of meta-learning in some tasks.\", \"This work serves as a nice attempt to inform the practice of meta-learning with theoretical insights. Below are some of my concerns.\", \"In some experimental results, the improvement due to the proposed regularization seems to be at the same level as the standard deviation, as well as the difference between the reproduced results of existing meta-learning algorithms and those reported in earlier papers. This casts doubt on the true efficacy of the proposed methods.\", \"For the loss function in Eq. (4), it is more reasonable and natural to introduce two weighting parameters (as tunable hyperparameters) for the proposed regularization terms.\", \"The authors often talk about \\\"enforcing/ensuring the assumptions\\\". However, from my understanding, whether the assumptions (on the optimal linear predictors, or \\\"ground-truth\\\" predictors) hold or not depends on the learning problem itself, NOT on the algorithms. Therefore, there is no way we can enforce/ensure these assumptions. I would prefer using the phrase \\\"respecting the assumptions\\\" (used by the authors on Page 8); this seems more accurate and reasonable.\", \"Following the previous point, I'm curious about one question: if the learning problem actually doesn't satisfy the two assumptions, then is it still helpful to add the proposed regularization terms to the loss function? 
(I'm not sure, but my guess is no; indeed, it might even hurt.) To solve puzzles like this, I would encourage the authors to conduct some synthetic experiments, where they can design the data generating process (e.g. they can control whether the true linear predictors satisfy the assumptions or not). Since this work is a connection between theory and practice, I believe that experiments with synthetic data can help explain things more clearly and make the claims more convincing.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms\", \"review\": \"Summary:\\nIn this paper, the authors aim at bridging the gap between the practice and theory in meta-learning approaches. Specifically, they propose two regularization terms to 1) capture the diversity of the tasks and 2) control the norm of the prediction layer, thereby satisfying the assumptions in meta-learning theory.\", \"strength\": [\"The motivation of this paper is interesting, before proposing the methodology. These theoretical assumptions have not been paid enough attention before.\", \"The paper is well-organized and clearly written.\", \"The experimental setting is designed in a good manner and the results are promising.\"], \"weakness\": [\"I am skeptical of the novelty of the second regularizer in Eq. (4). According to Section 3.2, it is equivalent to ||w||_{2}=O(1). So how does it differ from a simple l2 weight decay?\", \"According to Section 2, the outer-level parameters are restricted to a linear layer. Does this mean the proposed regularizers would become trivial when applied on top of a more complicated model, e.g., LEO [1]?\", \"Too few competitors. It would be better to add some comparisons with recent methods.\", \"The details of how to calculate the subgradients of the singular values, which is quite complicated, are missing. Especially seeing that there is no guarantee that an auto-differentiation tool will do that correctly.\"], \"ref\": \"[1] Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, Raia Hadsell: Meta-Learning with Latent Embedding Optimization. ICLR 2019\\n\\nOverall, since the contribution and the technical details to calculate the subgradients are not clear to me, I have to currently recommend a weak reject.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
9y4qOAIfA9r | Does injecting linguistic structure into language models lead to better alignment with brain recordings? | [
"Mostafa Abdou",
"Ana Valeria González",
"Mariya K Toneva",
"Daniel Hershcovich",
"Anders Søgaard"
] | Neuroscientists evaluate deep neural networks for natural language processing as possible candidate models for how language is processed in the brain. These models are often trained without explicit linguistic supervision, but have been shown to learn some linguistic structure in the absence of such supervision (Manning et. al, 2020), potentially questioning the relevance of symbolic linguistic theories in modeling such cognitive processes (Warstadt & Bowman, 2020). We evaluate across two fMRI datasets whether language models align better with brain recordings, if their attention is biased by annotations from syntactic or semantic formalisms. Using structure from dependency or minimal recursion semantic annotations, we find alignments improve significantly for one of the datasets. For another dataset, we see more mixed results. We present an extensive analysis of these results. Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain, expanding the range of possible scientific inferences a neuroscientist could make, and opens up new opportunities for cross-pollination between computational neuroscience and linguistics.
| [
"neurolinguistics",
"natural language processing",
"computational neuroscience"
] | Reject | https://openreview.net/pdf?id=9y4qOAIfA9r | https://openreview.net/forum?id=9y4qOAIfA9r | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"68BYXNFHTR7",
"4l7crwAbioJ",
"USxBVgNd_34",
"NlvKs-55IFf",
"HRU-1QJayQE",
"05W0NX2okx0",
"Xv39qNnz0ct",
"fEDdakEBqGD",
"EyBpYOU6h9W",
"ob8UXL5rO4h",
"qwTYI6gBkGF",
"Dp9vC7JXi1Y",
"ErZ-t3zCjqu",
"IKPgvd-G2u",
"XGYA484ZAxt"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040406052,
1606220507548,
1605695241550,
1605633868229,
1605619496950,
1605609635751,
1605527801050,
1605483551647,
1605431302242,
1605431109524,
1605187839574,
1604004916135,
1603939391984,
1603926603106,
1603847571148
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3548/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3548/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3548/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3548/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3548/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3548/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3548/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3548/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3548/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3548/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3548/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3548/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3548/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3548/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper explores the effect on decoding accuracy (predicting hidden representations from fMRI datasets) of fine-tuning models by injecting structural bias. This paper specifically focuses the attention of BERT on syntactic features of the text, which (for one dataset) appears to improve the decoding performance. The paper's motivation is strong, and complex concepts are communicated clearly.\\n\\nThe review period was very productive. There were some questions about analyses, and the validity of the statistical tests, but through some very thorough back and forth with the reviewers, this seems to have been resolved. There is a good amount of analysis done on the resulting language models to try and determine the impact of finetuning or attention on the models. However, the results on the two fMRI datasets appear to be very different, and it's unclear why (and isn't clearly related back to the extensive language model analyses). We would have liked to have seen a more thorough analysis of the stark difference in performance, and some convincing explanations for the difference based on the analyses. \\n\\n\\nP.S. A minor point, but the Wehbe paper uses Chapter 9 of Harry Potter, not chapter 2.\"}",
"{\"title\": \"Summary of revision\", \"comment\": [\"Dear reviewers, we appreciate your feedback. A lot of minor changes have been added based on your suggestions; we hope we have addressed all of your concerns. To clarify what has been changed, we have compiled the following overview.\", \"Based on reviewers\\u2019 suggestions, we elaborate on our discussion in section 5, including an analysis of the impact of structural bias on how models encode semantic information. We find that the structurally biased models improve in their ability to make a range of semantic distinctions. Details of how this analysis is conducted and further discussion are added to Appendix E.\", \"We correct the method by which perplexity is calculated for the analysis of the effect of domain, and include the results for both the domain-finetuned baselines and the structurally biased models in Appendix D. Our main conclusions regarding the effect of domain remain largely unchanged.\", \"We also correct part of the methodology for our statistical testing as suggested by reviewer #3, and make some adjustments to our discussion of the Pereira 2018 results based on this. The Wehbe 2014 results are not affected by this, due to being highly significant. We also include the details of our statistical testing procedure and results, including testing for strength of generalization across subjects, in Appendix C.\", \"We include the rank-based metrics from Gauthier and Levy (2018), which give the rank of a ground-truth sentence representation in the list of nearest neighbors (computed via cosine similarity) of a predicted sentence representation (Appendix B). Alongside the GA models\\u2019 improved performance on the targeted syntactic evaluation and the semantic tagging tasks, this provides ample evidence that the main evaluation metric is not \\u2018trivially gamed\\u2019 by the representations induced by the structurally biased models.\"]}",
"{\"title\": \"Response to reviewer #2\", \"comment\": [\"Dear reviewer #2, we thank you for your appreciation of our work and your helpful comments/suggestions, which we aim to address:\", \"Regarding the direction of prediction in the regression between the brain and model representations: in addition to comparing the regression performance during brain decoding, we have also evaluated all models on a range of syntactic probing tasks proposed by Marvin & Linzen (2019). From these evaluations, we observe that after attention-guided fine-tuning: a) two of the guided-attention models have a higher score than the pretrained baseline and the domain-finetuned baselines for most tasks and b) the ranking of the models corresponds to their ranking on the brain decoding task (DM > UD > UCCA). Taking both the brain decoding results and these syntactic probing results together, we argue that the guided attention has altered the model representations in a beneficial way that goes beyond just simplifying the representations in a task-irrelevant way. However, we agree with the reviewer that investigating the opposite direction of prediction (from the model representations to the brain representations) is also informative, and indeed this is a recently popular direction (Toneva and Wehbe, 2019; Schwartz et al. 2019; Schrimpf et al. 2020) that will make for an excellent future application of our proposed method.\", \"We will clarify and consolidate our discussion sections to better highlight the conclusions.\", \"Regarding the data, it is all manually annotated by expert annotators. We had cut the section short to save space, but will now include the additional information. Fine-tuning data size does indeed correspond to decoding score (for Wehbe 2014) and even to performance on (most of) the subject-verb agreement tasks. We will add mention of this.\"]}",
"{\"title\": \"Another response re: statistical testing\", \"comment\": \"We apologize for the confusion arising from our compounded test - thanks a lot for spotting this!\\nBelow, we report corrected p-values from the signed rank test applied directly to the 384 scores for the **GA UD** vs. **PRE** and the **GA DM** vs. **PRE** comparisons, per subject:\\n\\n| model/subject \\t| M2 \\t| M4 \\t| M7 \\t| M8 \\t| M9 \\t| M14 \\t| M15 \\t| P01 \\t|\\n|----------------------\\t|---------\\t|-------\\t|-------\\t|-------\\t|----------\\t|---------\\t|-------\\t|-------\\t|\\n| **GA UD** vs.**PRE** \\t| 5.62e-5 \\t| 0.331 \\t| 0.011 \\t| 0.045 \\t| 1.41e-08 \\t| 0.0076 \\t| 0.85 \\t| 0.065 \\t|\\n| **GA DM** vs.**PRE** \\t| 0.25 \\t| 0.25 \\t| 0.076 \\t| 0.038 \\t| 0.0001 \\t| 0.849 \\t| 0.56 \\t| 0.11 \\t|\\n\\nWe find that while **GA UD** is still significantly better for most subjects, the same is not true for **GA DM** (**GA UCCA**, meanwhile, is, unsurprisingly, significantly worse for most subjects). For the sake of completeness, we apply this same procedure to Wehbe 2014, getting corrected p-values that are << 0.001 for all subjects across the three **GA** models.\\nWe will update both the p-values and the procedure reported in the paper. We will also include the results for generalisation across subjects.\"}",
"{\"title\": \"Response re: statistical testing\", \"comment\": \"Thank you, this is all very helpful! I think I understand better. If you are running the signed rank test on the set of 3000 paired scores resulting from bootstrapping, I think that is an incorrect approach. As an example, suppose you only had three stimulus sentences, and the GA UD - PRE rank difference scores for these three stimulus sentences are 0.05, 0.04, -0.02 (modulo some multiplicative constant depending on how you're representing rank scores). In all likelihood, over 80% of your samples would have a mean difference score above 0, and the signed rank test would come out highly significant. Here is a bit of simple R code showing this:\\n\\n item_difference_scores <- c(0.05,0.04,-0.02)\\n wilcox.test(item_difference_scores) # clearly as far from significance as possible\\n samples <- sapply(1:3000,function(ignore) mean(sample(item_difference_scores,3,replace=TRUE)))\\n mean(samples>0)\\n wilcox.test(samples) # comes out highly significant\", \"this_approach_is_not_valid\": \"it is effectively treating the 3000 samples as iid, whereas they are definitely not.\\n\\nNormally one would use the bootstrap to get to a p-value by generating a bootstrap distribution of the test statistic of interest under the null hypothesis that neither of the two models is appreciably better than the other. If you are getting a statistically significant result under a nonparametric test like signed rank, the bootstrap is probably not necessary.\\n\\n384 items is not such a small number of items. What results do you get when you apply the signed rank test directly to the set of 384 difference scores for each model pair?\\n\\nI do think you need to include by-subject analyses along the lines of what you have above. The big picture here is that you need to test your results for strength of evidence of generalization over both subjects and stimulus sentences.\"}",
"{\"title\": \"Response to follow-up questions\", \"comment\": \"Thank you for following up!\\n\\nYou are correct in thinking that our statistical testing does not directly address generalization across subjects. Generalization across subjects is notoriously difficult in brain imaging studies, due to small sample size, anatomical differences between subjects, and the fact that neural response patterns can be highly diverse across subjects. The statistical tests we report aim to test for significance per subject. The exact procedure we carry out (described specifically for Pereira 2018) is as follows:\", \"for_each_subject\": \"1. There are 384 stimulus sentences, corresponding to 384 fMRI recordings; a linear decoder is trained to map each recording to its corresponding LM-extracted (PRE, DF-*, GA-*) sentence representation. This is done using 12-fold cross-validation. This yields a predicted \\u2018sentence representation\\u2019 per stimulus sentence. \\n2. To compensate for the small size of the dataset, which might lead to a noisy estimate of the linear decoder\\u2019s performance, we now randomly resample 384 datapoints (with replacement) from the full 384 datapoints.\\n3. For each resampling, our evaluation metrics (Pearson r, average rank, etc.) are computed between the sampled predictions and their corresponding \\u2018gold representations\\u2019, for all sets of LM reps. We store the mean metric value (e.g. Pearson r score) across the 384 \\u2018sampled\\u2019 datapoints. We run 3000 such iterations. \\n4. This gives us 3000 such paired scores across models. \\n5. We now run the signed rank test to test whether two given models\\u2019 scores were drawn from populations having the same distribution. This returns a p-value. \\n6. After applying the Bonferroni correction for multiple hypothesis testing, this is the p-value we report. 
\\n\\nHaving said that, if we run the analysis you describe, applying a signed rank test to the by-subject mean scores below:\\n\\n| model/subject \\t| M2 \\t| M4 \\t| M7 \\t| M8 \\t| M9 \\t| M14 \\t| M15 \\t| P01 \\t|\\n|---------------\\t|-------\\t|-------\\t|-------\\t|-------\\t|-------\\t|-------\\t|-------\\t|-------\\t|\\n| **PRE** \\t| 0.312 \\t| 0.258 \\t| 0.285 \\t| 0.266 \\t| 0.246 \\t| 0.217 \\t| 0.285 \\t| 0.342 \\t|\\n| **GA UD** \\t| 0.325 \\t| 0.267 \\t| 0.294 \\t| 0.274 \\t| 0.259 \\t| 0.230 \\t| 0.286 \\t| 0.350 \\t|\\n| **GA DM** \\t| 0.316 \\t| 0.263 \\t| 0.292 \\t| 0.269 \\t| 0.253 \\t| 0.223 \\t| 0.282 \\t| 0.345 \\t|\\n| **GA UCCA** \\t| 0.303 \\t| 0.247 \\t| 0.279 \\t| 0.262 \\t| 0.240 \\t| 0.211 \\t| 0.271 \\t| 0.334 \\t|\\n\\n\\n- For PRE vs. **GA UD** we get an uncorrected p-value of 0.0078 (in line with your calculation, **GA UD** > PRE for all subjects) \\n- For PRE vs. **GA DM** we get an uncorrected p-value of 0.015\\n- For PRE vs. **GA UCCA** we get an uncorrected p-value of 0.0078 (here PRE > **GA UCCA** for all subjects)\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Dear Reviewer #1, we thank you for your appreciation of our work and for your helpful suggestions on how to improve it.\\n\\nWe agree that expanding the scope of the experiments to other language models could potentially yield interesting conclusions regarding the interaction of structural bias with model size and architecture. We made a conscious choice to focus in this work on evaluating across multiple linguistic formalisms and on presenting results for more than one imaging dataset, since these two facets of our investigation were more immediately crucial to the core research questions. However, we see extensions along the \\u2018architecture/training objective\\u2019 dimension as an important next step that we would very much like to address in follow-up work.\\n\\nAnalysing the effect of structural bias on the models\\u2019 encoding of semantics could indeed potentially allow for a deeper understanding of the factors which lead to a better alignment with the brain recording data. The task proposed in Yaghoobzadeh et al., 2019 appears better suited to non-contextualized word embeddings than to contextualized ones. However, we are currently running an analysis using the Semantic Tagging task ([1]), which involves assigning one of 80 fine-grained \\u2018semantic tags\\u2019 which cover a broad range of semantic classes (e.g.: discourse relations, logical semantics, anaphora, named entities, etc.), and describe the \\u201csemantic contribution of the token with respect to the meaning of the source expression\\u201d. We will report the results of this analysis and include it in the paper over the next few days. We thank you for the suggestion!\", \"1\": \"https://www.aclweb.org/anthology/W17-6901/\"}",
"{\"title\": \"A couple of follow-up questions\", \"comment\": \"Thank you for the thoughtful response! I have a follow-up question regarding your p-values:\\n\\n>>Section 4.1 reports that UD and DM finetuned models are significantly better in brain decoding than the un-finetuned baseline, at p<0.0001, but the 95% confidence intervals for subject scores look very different. And the difference in mean decoding performance for DM finetuning is barely visible. How are you computing your confidence intervals and your p-values? Why are they so different, and how are you getting such high confidence in improvements over the unfinetuned baseline here?\\n\\n>The p-values are calculated on the basis of a Wilcoxon signed rank test, applied to the results of 3000 iterations of bootstrap resampling of the mean (across cross-validation splits) pearson\\u2019s r score per subject. So the differences between the DM/UD models and the pretrained baseline are significant at p<0.0001 per subject. The plot shows the mean decoding scores across cross-validation splits, bootstrap iterations, linear decoder runs, MLM (+ guided-attention) finetuning runs, and finally subjects. The confidence intervals are calculated based on this final quantity (subject scores), which is not the same quantity as the quantity used for computing the p-values we report (which are per subject). We will clarify this.\\n\\nI am still confused. Let's focus on Pereira2018 since that is the dataset where the decoding performance changes seems so small. This dataset has 8 subjects and I understand you are doing 12-fold cross-validation (for each subject, is that right?). What exactly is getting resampled in the bootstrapping process? What is the Wilcoxon signed rank test being applied to? And what does \\\"p<0.0001 per subject\\\" mean? 
How does your procedure test for strength of generalization across subjects?\\n\\nTo approach this from a different angle -- a simple test you could do is a paired t-test between the by-subject mean performance of (i) the GA model, and (ii) the model you're comparing it with. So, since you have 8 subjects worth of data you would have 8 pairs of values. Or you could use the signed rank test on these 8 subjects worth of data. From my own quick check, the lowest p-value you could get with the Wilcoxon signed rank test for 8 subjects is p=0.008 (if every subject's difference score consistently favors the GA model over the baseline) so I'm concerned that you may not be adequately testing for strength of generalization across subjects.\"}",
"{\"title\": \"Response to reviewer #3: Answers to technical questions (Part 2)\", \"comment\": \"##### Answers to technical questions: #####\\n\\n\\n*How is the split of a word into word pieces handled in the adjacency matrix representing word\\u2013word dependencies?*\\n- We align word pieces with their corresponding words, then each word piece that is part of a word is included in the dependency. e.g. if Word1 is made up of {w-pieceA, w-pieceB} and Word2 of {w-pieceF, w-pieceG} and Word1 and Word2 have a dependency in the word adjacency matrix, then the word piece adjacency matrix we build will have w-pieceA and w-pieceB each connected to both w-pieceF, and w-pieceG (and vice-versa, since edge directionality is not considered in our setup). \\n\\n*How are the adjacency matrix and each head's attention weight matrix converted into a distribution for computing cross-entropy loss? Are the entries normalized globally? By row? By column?*\\n - The entries are normalized globally, i.e. across the entire attention weight matrix. \\n\\n*What are the perplexities like for domain-finetuned (no structural attention constraint) BERT? These are missing from Table 2 (Appendix B), but are potentially important in interpreting your results.*\\n- Please see above (part 1 of response). \\n\\n*Section 4.1 reports that UD and DM finetuned models are significantly better in brain decoding than the un-finetuned baseline, at p<0.0001, but the 95% confidence intervals for subject scores look very different. And the difference in mean decoding performance for DM finetuning is barely visible. How are you computing your confidence intervals and your p-values? 
Why are they so different, and how are you getting such high confidence in improvements over the unfinetuned baseline here?*\\n- The p-values are calculated on the basis of a Wilcoxon signed rank test, applied to the results of 3000 iterations of bootstrap resampling of the mean (across cross-validation splits) pearson\\u2019s r score per subject. So the differences between the DM/UD models and the pretrained baseline are significant at p<0.0001 per subject. The plot shows the mean decoding scores across cross-validation splits, bootstrap iterations, linear decoder runs, MLM (+ guided-attention) finetuning runs, and finally subjects. The confidence intervals are calculated based on this final quantity (subject scores), which is not the same quantity as the quantity used for computing the p-values we report (which are per subject). We will clarify this. \\n\\n*How do your results compare to those using the best fine-tuning methods from Gauthier & Levy (2019), which involve scrambling the input sentences?*\\n- In initial experiments, we find that using final layer hidden state mean pooling instead of the \\u2018[CLS]\\u2019 token yields representations which can be mapped with significantly higher Pearson\\u2019s r and lower Average Rank (AR, please see response to reviewer four for description) scores on Pereira 2018. Therefore, the decoding scores we report for the baseline \\u2018pretrained BERT\\u2019 are already an \\u2018improvement\\u2019 compared to the best finetuning results reported in Gauthier & Levy (2019), which we also independently replicate (pearson\\u2019s r \\u2248 25.5 vs. pearson\\u2019s r \\u2248 27.6 || AR \\u2248 32 vs. \\u2248 38)). 
We do not subsequently run a direct comparison of finetuning methods -- that is, using an equivalent representation extraction method -- because the conclusions which can be drawn from such a comparison are not immediately transparent: changes in decoding score, would likely be due to the influence of a variety of different factors in the two experiments. There is, however, potentially a rather interesting connection to be explored there, since there is recent work ([1]) showing that \\u2018composition\\u2019 drives the brain\\u2019s language network, even with the violation of grammatical word order restrictions, as long as local dependencies are preserved. \\n\\n*Given that in Wehbe2014 each fMRI image corresponds to four words, most of which probably contain both function and content words, how is the content/function word analysis defined and performed?*\\n- During decoding, each \\u2018target representation\\u2019 (BERT hidden state corresponding to a word) either represents a content word or a function word. Although the source representation (fMRI recording corresponding to a two second time interval and four words) does more likely than not include a combination of both content and function words, the analysis is conducted on the basis of the target representations, and therefore it is possible to separately evaluate the decoding performance for each word contained in a given time interval, e.g. how well can the hidden representation of each word in the phrase \\u2018win that Quidditch Cup\\u2019 be decoded from the fMRI recording representing the phrase.\", \"1\": \"https://www.mitpressjournals.org/doi/full/10.1162/nol_a_00005\"}",
"{\"title\": \"Response to reviewer #3: thanks, clarifications (Part 1)\", \"comment\": \"Dear reviewer #3, we truly appreciate your comprehensive and thoughtful review, which has already helped us improve this work. Please let us know if you have any additional comments or questions.\\n\\n#### Regarding word perplexity, there are two important clarifications to make: ####\\n\\n- We have found an explanation for the anomalously high perplexity scores. In the results reported in Table 2, the exponentiation of the log-likelihood term was being applied per sentence (i.e. over the average word log-likelihood per sentence), rather than over the entire dataset. \\n\\n- The results in Table 2 are actually for the domain-finetuned baselines, i.e. the models fine-tuned on each formalism\\u2019s corpus without the structural attention constraint. This was not sufficiently clear. \\n\\nWe have now adjusted the method by which the perplexity was being calculated, and included the results for both the domain-finetuned baselines and the structurally biased models. Please find the results below (we will also update them in the appendix):\\n* PRE: pretrained\\n* DF-B: domain-finetuned baseline\\n* GA: guided-attention finetuning\\n\\n*Pereira et al. (2018)*\\n\\n| Model \\t| Perplexity \\t|\\n|---------------\\t|------------\\t|\\n| **PRE** \\t| 14.09 \\t|\\n| **DF-B DM** \\t| 19.11 \\t|\\n| **DF-B UD** \\t| 19.08 \\t|\\n| **DF-B UCCA** \\t| 20.67 \\t|\\n| **GA DM** \\t| 20.82 \\t|\\n| **GA UD** \\t| 17.15 \\t|\\n| **GA UCCA** \\t| 17.47 \\t|\\n\\n*Wehbe et al. (2014)*\\n\\n| Model \\t| Perplexity \\t|\\n|---------------\\t|------------\\t|\\n| **PRE** \\t| 34.79 \\t|\\n| **DF-B DM** \\t| 36.11 \\t|\\n| **DF-B UD** \\t| 38.41 \\t|\\n| **DF-B UCCA** \\t| 40.45 \\t|\\n| **GA DM** \\t| 33.24 \\t|\\n| **GA UD** \\t| 37.16 \\t|\\n| **GA UCCA** \\t| 33.60 \\t|\", \"we_now_observe_the_following\": [\"Our main conclusion re. 
the effect of domain remains unchanged: simply running MLM finetuning on each of the texts of the three datasets (UD, DM, and UCCA) leads to higher perplexity scores on the fMRI stimuli texts. Moreover, except in the case of DM for Pereira 2018, the models finetuned via MLM + guided attention (GA), have lower perplexities than their domain-finetuned baseline counterparts.\", \"As you correctly note, there is, overall, no clear correspondence between lower perplexity and higher brain decoding scores -- although we find a tendency for the domain-finetuned baselines, where a higher decoding score (descending rank, P2018: UD > DM > UCCA; W2014: DM > UD > UCCA) corresponds to a lower perplexity (ascending ranking, P2018: UD > DM > UCCA; W2014: DM > UD > UCCA). This does not hold for the structurally biased models (as domain is, perhaps, no longer the primary factor involved).\"]}",
"{\"title\": \"Response to reviewer 4: thanks and comments\", \"comment\": \"Dear Reviewer #4, we thank you for your helpful comments and feedback.\", \"regarding_the_evaluation_metrics\": \"We report pearson\\u2019s r correlation, employing it as a bounded, invariant measure of representational similarity. In general, this is of course, yes, vulnerable to \\u2018trivial gaming\\u2019, as instanced in your all zeroes example. In our case, however, there is little risk of that occurring, as:\\n\\nA) The models are not directly fine-tuned to become more similar to B_i, so should not learn a 'trivial solution'.\\n\\nB) Even if there could still, theoretically, be a confound where D_fr becomes more \\\"simple\\\"/trivially predictable due to fine-tuning, we believe this is clearly not the case, as the fine-tuned models are able to induce representations which outperform the non-fine-tuned BERT on the targeted-syntactic evaluation tasks.\\n\\nFurthermore, we have also computed the rank-based metric from Gauthier and Levy which gives the rank of a ground-truth sentence representation in the list of nearest neighbors (computed via cosine similarity) of a predicted sentence representation. We found a strong correspondence between this and the metric we have reported in the paper (which was more stable across subjects, and between datasets), therefore omitted it from the paper for the sake of clarity and space. However, you are correct that including it would offer a more complete picture. We thank you for raising this point. 
Please find these results for Wehbe 2014 in the table below (we will add this and a similar table for Pereira 2018 to the appendix):\\n\\nPre.: pretrained\", \"df\": \"domain-finetuned\", \"ag\": \"attention-guided finetuning\\n\\nWehbe 2014 (Mean and Median ranks are out of a total of 4369 words in dataset):\\n\\n| Model/Metric \\t| Pearson r \\t| Mean Rank \\t| Median Rank \\t|\\n|------------------\\t|-----------\\t|-----------\\t|-------------\\t|\\n| Pre. \\t| 0.225 \\t| 436.70 \\t| 53.13 \\t|\\n|------------------\\t|-----------\\t|-----------\\t|-------------\\t|\\n| Df-baseline-dm \\t| 0.204 \\t| 493.11 \\t| 89.32 \\t|\\n| Df-baseline-ud \\t| 0.206 \\t| 497.24 \\t| 81.69 \\t|\\n| Df-baseline-ucca \\t| 0.164 \\t| 689.89 \\t| 227.30 \\t|\\n|------------------\\t|-----------\\t|-----------\\t|-------------\\t|\\n| Ag-dm \\t| 0.343 \\t| 172.45 \\t| 10.96 \\t|\\n| Ag-ud \\t| 0.280 \\t| 255.127 \\t| 18.28 \\t|\\n| Ag-ucca \\t| 0.261 \\t| 315.73 \\t| 25.78 \\t|\\n\\nThe table shows that the models which have higher Pearson r scores, also have a lower average ground truth word/sentence nearest neighbour rank i.e. induce representations that better support contrasts between sentences/words which are relevant to the brain recordings. We hope that this adequately addresses your unease re. 
the methodology of evaluation.\\n\\nRegarding the first point, we would like to respectfully dispute the characterization of the work\\u2019s contribution as incremental: A) we present a novel approach which allows for targeted evaluation of particular structural hypotheses from linguistic theory regarding the composition of meaning in the brain; B) utilising this, we conduct a carefully controlled evaluation involving three different syntactic and semantic linguistic formalisms across two fMRI datasets of different granularities; C) we then present an analysis of a variety of factors including textual domain, ability to model different syntactic constructions, and word class (content vs. function).\\n\\nNaturally, we agree that a deeper analysis is of interest. The scope of our analysis is necessarily restricted both by space and the amount of information one can reasonably include in an already packed work. We are currently conducting a fine-grained analysis of the fine-tuned and non-finetuned models\\u2019 representation of semantic information, as suggested by Reviewer #1, and will include it.\"}",
"{\"title\": \"Review of \\\"DOES INJECTING LINGUISTIC STRUCTURE INTO LANGUAGE MODELS LEAD TO BETTER ALIGNMENT WITH BRAIN RECORDINGS?\\\"\", \"review\": \"This paper describes experiments that inject linguistic information (for example dependency structures) into BERT, then measure improvements in correlation with fMRI measurements of humans reading an underlying sentence (which is also analyzed by BERT). Linguistic information is incorporated by biasing attention heads to line up with dependency (or other) structures.\", \"positives_about_the_paper\": \"it's an interesting experiment to try, and an important direction of work.\", \"negatives\": [\"It's a somewhat small increment over previous work, not sure it merits a full conference paper. As it stands the paper presents the approach and results, with little inspection of why improvements are seen. I would like the authors to go much deeper with the analysis. Are there particular syntactic constructions that are being better modeled? Is the new model much more sensitive to long range dependencies, as found in syntactic structures? Are particular classes of words affected more than others? Answering these questions will be challenging but would add a lot to the paper.\", \"Most importantly, the evaluation metrics are unclear. The critical sentence in the paper is \\\"To evaluate the regression models, Pearson\\u2019s correlation coefficient between the predicted and the corresponding heldout true sentence or word representations is computed\\\". This is a terse description of a critical part of the approach, and I can't make sense of it.\", \"Part of my unease about the evaluation is the following. The matrix $D_{fr}$ is the output from BERT. Importantly, in the definition of L_{ifr} this matrix is predicted from the \\\"brain\\\" matrix B_i. If $D_{fr}$ was all zeros it would be trivially predictable (and hence gameable). In the original Gauthier and Levy paper they appear to use metrics in addition to MSE. 
In this paper some variant of Pearson's correlation coefficient is used - but I can't understand what exactly this is, and my worry is that it is trivially gameable in the same way as MSE.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Does it work the same way on other LM?\", \"review\": \"An interesting paper that discusses whether injecting three types of syntactic and semantic formalisms leads to better alignment with how language is processed in the brain. The authors conduct experiments with the BERT model and two fMRI datasets and show that including linguistic structure through fine-tuning can improve brain decoding performance.\\n\\nThe paper would be improved by experimenting with language models other than BERT, as it is not clear at the moment whether the produced results are generalizable to different language models or are BERT-specific. For example, additional experiments with ALBERT, DistilBERT and RoBERTa would provide additional insights on the effect of size of the model, in terms of the number of parameters. Comparison of BERT to GPT and XLNet would emphasize the advantages/disadvantages of autoencoder-based vs autoregressive models and could potentially provide additional insight on how attention is represented in the human brain.\\n\\nIt would also be interesting to read a discussion of semantic analysis, as currently the paper concentrates the most on syntactic formalism as represented in both BERT and fMRI data. Specifically, it would be interesting to know if the injection of syntax impacts the semantic representations. One of the possible methods to measure that would be probing for semantic classes (as in Yaghoobzadeh et al., 2019. Probing for Semantic Classes)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting paper with an interdisciplinary appeal. In general well-executed, well-written study, with some (minor) issues.\", \"review\": \"This paper tests whether fine-tuning large pre-trained language models\\n(LMs) with structural information can increase the correlation between\\nthese representations and the representations of brain activity\\nmeasured while processing the same stimuli. The injection of the\\nstructural information is done through fine-tuning of the pre-trained\\nmodel by \\\"guided attention\\\", which makes use of binary relations\\nbetween the words according to three different syntactic or semantic\\nformalisms. The authors map the brain activity to each of the\\nalternative LM representations via a regression model, and measure the\\nalignment by using correlation between the predicted (from brain\\nactivity) and actual output of the alternative models. The results\\nshow that under certain conditions representations learned through\\nguided attention align better with the representations of brain\\nactivity.\\n\\nIn general the paper investigates an interesting question which may be\\n(eventually) relevant to both understanding the way humans process\\nlanguage, and possibly building better computational models. The\\nmethod followed in the study is (mostly) sound, and the paper is\\nwritten well.\\n\\nA potential issue with the method is the direction of the prediction\\nin \\\"brain decoding\\\" regression (section 3.5). Authors predict the\\nmodel representations from the \\\"brain representation\\\" (this seems to\\nbe based on earlier studies, but I did not verify). In my opinion the\\nreverse is more meaningful, since the invariant quantity in this study\\nis the representations coming from the brain imaging. This is\\nimportant, because the success of the regression is not only about the\\namount of information in the predictors, but also simplicity of the\\ntask. 
Hence, an alteration of the model representations that\\nsimplifies them may result in better predictions, and hence, higher\\ncorrelations.\\n\\nExcept for the above, I have some (mostly minor) comments:\\n\\n- I would be happier with a bit more explicit discussion of the main\\n results. After reading the article, I am still not sure what to\\n take away from the main experiments. The effect on two different\\n data sets (also means representations at different levels/units) are\\n quite different - not allowing a clear conclusion. Side issues\\n discussed (the effects of the use of different formalisms, the\\n effect of domain, particular syntactic patterns, content vs.\\n function words) are also relatively brief and far from being\\n conclusive. I think a clearer discussion of the main results, and\\n investigation of reasons for the discrepancy between the data sets\\n would make the paper stronger.\\n\\n- It would help if the data is explained slightly better.\\n Particularly, it would make the paper more self-contained if the\\n authors specify whether any of the data sets (section 3.3) had\\n automatic annotation. On a somewhat related note, comparisons\\n between the formalisms seem to correlate with the data sizes, which\\n is not pointed out in the paper.\\n\\n- A few language/typography issues/suggestions: \\n\\n - I am not sure about the ICLR guidelines, but avoiding citations\\n in the abstract is a good idea (abstracts should stand alone).\\n - Footnote marks should go after punctuation (footnote mark 8)\\n - Conclusions line 3: \\\"attention guided\\\" -> \\\"guided attention\\\" ?\\n - There are case (normalization) issues in the references:\\n \\\"groningen\\\", \\\"erp\\\", \\\"bert\\\" (not exhaustive, a thorough check is\\n recommended).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Provocative paper, but several technical concerns\", \"review\": \"Summary of paper: the authors explore adding a soft structural attention constraint to BERT, by penalizing attention weights that are substantially different from a head\\u2013dependent \\\"adjacency\\\" matrix derived from dependency parses. BERT is then fine-tuned with and without (\\\"domain-finetuned\\\") this constraint on corpus data for which fMRI recordings from participants during reading are available. A linear classifier from the final layer of BERT's embedding (mean-pooled) is then learned to the fMRI data. Within this pipeline, domain-finetuned models are not an improvement over unfinetuned BERT, but fine-tuning with the structural attention constraint improves decoding to fMRI data, especially for word-level data (the Wehbe2014 dataset).\", \"assessment\": \"this is a nice paper that investigates an intuitive method of incorporating syntax-based, structural soft attention constraints into Transformer encoder models for language. What makes the contribution fairly distinctive is evaluation on alignment with human fMRI recordings during comprehension of the texts. The results show improvements in decoding relative to baseline models that involve no fine-tuning and/or domain-adaptation fine-tuning alone (no structural attention constraints), especially for fMRI data that are recorded below the sentence level. The authors also evaluate the effect of fine-tuning on targeted syntactic evaluations from Marvin & Linzen; the results here are not particularly conclusive. Overall, this is a potentially solid, if not ground-breaking, contribution. However, there are a number of technical questions that are left unclear in the submission, and some of the results are cause for some concern. 
These concerns need to be addressed in order for the submission to be fully satisfactory.\\n\\nThe single biggest concern is the extraordinarily high word perplexity scores in Table 2 for Wehbe2014 -- which get much, much worse after fine-tuning. It is important to understand what's going on here in order to make sense of the core potential contribution of the paper, because it's only in the Wehbe2014 dataset where there seem to be appreciable improvements in decoding performance. I would guess that the high perplexity comes from poor prediction of the proper nouns in the Harry Potter book chapter. Maybe there needs to be some amount of fine-tuning of the models to the domain of the test-set corpus. Overall, the paper needs more clarity on why it is only the Wehbe2014 dataset where the perplexity is so high and the fine-tuning affects decoding performance so much.\", \"additional_technical_questions\": \"1) How is the split of a word into word pieces handled in the adjacency matrix representing word\\u2013word dependencies?\\n\\n2) How are the adjacency matrix and each head's attention weight matrix converted into a distribution for computing cross-entropy loss? Are the entries normalized globally? By row? By column?\\n\\n3) What are the perplexities like for domain-finetuned (no structural attention constraint) BERT? These are missing from Table 2 (Appendix B), but are potentially important in interpreting your results.\\n\\n4) What words are pooled over for the Wehbe2014 analyses -- the four words in the 2-second window?\\n\\n5) Section 4.1 reports that UD and DM finetuned models are significantly better in brain decoding than the un-finetuned baseline, at p<0.0001, but the 95% confidence intervals for subject scores look very different. And the difference in mean decoding performance for DM finetuning is barely visible. How are you computing your confidence intervals and your p-values? 
Why are they so different, and how are you getting such high confidence in improvements over the unfinetuned baseline here?\\n\\n6) How do your results compare to those using the best fine-tuning methods from Gauthier & Levy (2019), which involve scrambling the input sentences?\\n\\n7) Given that in Wehbe2014 each fMRI image corresponds to four words, most of which probably contain both function and content words, how is the content/function word analysis defined and performed?\", \"additional_comments\": [\"the authors write that \\\"increase in perplexity roughly corresponds to lower brain decoding scores\\\", but this doesn't look consistent with Table 2 and Figure 3. For Wehbe2014, UCCA data yield the worst decoding accuracy but yield better perplexity than DM data, which yield decoding accuracy only slightly worse than the UD data. The monotonicity is cleaner for Pereira2018 but the overall differences in decoding performance are much smaller.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
VD_ozqvBy4W | CoCon: A Self-Supervised Approach for Controlled Text Generation | [
"Alvin Chan",
"Yew-Soon Ong",
"Bill Pung",
"Aston Zhang",
"Jie Fu"
] | Pretrained Transformer-based language models (LMs) display remarkable natural language generation capabilities. With their immense potential, controlling text generation of such LMs is getting attention. While there are studies that seek to control high-level attributes (such as sentiment and topic) of generated text, there is still a lack of more precise control over its content at the word- and phrase-level. Here, we propose Content-Conditioner (CoCon) to control an LM's output text with a content input, at a fine-grained level. In our self-supervised approach, the CoCon block learns to help the LM complete a partially-observed text sequence by conditioning with content inputs that are withheld from the LM. Through experiments, we show that CoCon can naturally incorporate target content into generated texts and control high-level text attributes in a zero-shot manner. | [
"Language modeling",
"text generation",
"controlled generation",
"self-supervised learning"
] | Accept (Poster) | https://openreview.net/pdf?id=VD_ozqvBy4W | https://openreview.net/forum?id=VD_ozqvBy4W | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"to07RT7J8dh",
"Iap5rgKtiJU",
"0bEORr5KfNK",
"2NMK13cXrfb",
"c5-KomZbVWj",
"-S52XONMcYo",
"mZfFM6-J545",
"uF0yKyBFFl",
"eX0a5RCiQ8d",
"sE3wJONeRYr",
"TUtAzEx7ed_",
"Rzo0FTg0mUv",
"N3Nq1xKQy3"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040436933,
1606109829833,
1606109696498,
1606109635191,
1606109339517,
1606109131652,
1606109107038,
1606108568932,
1606107915154,
1603900839102,
1603900138986,
1603864891826,
1603835482069
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3544/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3544/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3544/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3544/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3544/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3544/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3544/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3544/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3544/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3544/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3544/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3544/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper aims at controllable generation by introducing an additional \\\"content-conditioner\\\" block in the Transformer models. The paper further provides 4 different variants of a pre-training task to train the content-conditioner model. \\n\\nWhile the proposed approach seems an incremental contribution over CTRL and PPLM, certain reviews praised the approach being novel while keeping the architecture changes minimal. Overall, reviews indicate that the overall proposed method of fine-grained controlled generation with self-supervision is valuable, and empirical results support its effectiveness. \\n\\nAll reviewers initially raised concerns regarding clarity and lack of human evaluation. However, clarity issues seem to be resolved through author/reviewer discussions and the updated revision.\\n\\nR3 had important concerns regarding topic and sentiment relevance evaluations. \\nWhile the reviewer remains unconvinced after discussions with authors, after carefully reading the revised paper and discussions, I feel that the authors tried to address this point fairly through their additional experiments and also edited their contribution statement accordingly.\\n\\nOverall, at least two reviewers sounded very excited about this work and other than R3's concerns, the general sentiment about this work was positive. Therefore, I recommend weak accept. \\n\\nThere are still some writing issues that I strongly encourage authors to carefully address in the future versions. Quoting from reviewer discussions:\\n\\n> Differentiability of the adversarial loss. Authors just added one statement saying \\\" Through continuous approximation..\\\" without any more details are given, which continuous approx was used (Gumbel softmax?) and how they overcame the problem of its training instability. 
\\n\\n> Table 6, can be misleading, authors bold the results when cocon+ is performing better than baselines (mostly in content similarity) but not the other way around topic/sentiment accuracy. The latter is arguably more important.\"}",
"{\"title\": \"Overall Response to All Reviewers\", \"comment\": [\"We would like to thank all the reviewers for the insightful and valuable comments. We have revised our paper based on the comments and provided the individual response to each reviewer. A summary of the key revision is presented here:\", \"Added human evaluation results for topic/sentiment relevance and fluency of CoCon and baselines\\u2019 text generation (Table 5 & 6).\", \"Added Figure 2 to better explain CoCon\\u2019s cycle reconstruction loss.\", \"Added human and automatic evaluation to better study CoCon\\u2019s versatility of conditioning on unseen content input on top of topic/sentiment control (Table 6 & 7)\", \"Edited Figure 1 and Section 3 to improve clarity.\", \"Incorporated discussions and clarification pointed out by reviewers in the main text body of the manuscript.\"]}",
"{\"title\": \"Response to Review 1 (Part 2)\", \"comment\": \"e) \\u201cWhy does training on (human-written) Webtext instead of (machine-written) outputs of GPT-2 decrease the text quality? Wouldn't we expect the opposite?\\u201d\\n\\nWe have since conducted and added human evaluation of fluency to further investigate this observation. Training on (human-written) Webtext instead of (machine-written) outputs of GPT-2 also decreases the human-perceived fluency score slightly. We speculate that since the (machine-written) outputs of GPT-2 are generated by the LM itself, CoCon\\u2019s prediction can more easily match the training labels, hence helping CoCon to better converge during training, consequently generating texts of slightly higher quality.\\n\\n\\nf) \\u201cThe above three questions lead me to believe that using perplexity on GPT-1 might not be a suitable metric to judge text quality in these scenarios. Could you please provide more arguments why you believe a human study is not needed here?\\u201d\\n\\nWe thank the reviewer for the suggestion. We have since added human evaluation after gaining approval to conduct it recently.\\n\\ng) \\u201cThe model extensions listed under 4.4 are very interesting. The paper would be even stronger if you had quantitative experiments for these..\\u201d\\n\\nWe thank the reviewer for the helpful suggestion. We have since added more experiments to show more quantitative results for multiple content inputs by using GPT-2 output texts as an additional content input on top of topic/sentiment content inputs to show the flexibility of CoCon to control both high-level attributes and content of the generation. These dual content input generations from CoCon are labeled as CoCon+ in our revised manuscript. Table 6 and Table 7 show CoCon+\\u2019s effectiveness in controlling the text\\u2019s content versus other baselines when evaluated with human and automatic metrics respectively. 
The higher content similarity to the additional GPT-2 passage content input shows that CoCon+ can generate text that is more similar to generic content inputs (GPT-2 passage) than other controlled text generation methods which share similar prompt text and target attributes.\\n\\n\\nh) \\u201cFor example, couldn't you apply your model to (gradual) sentiment transfer by conditioning CoCon on the input text as well as the target sentiment (\\\"is perfect\\\"), weighted by \\u03c4content? Even if the results were not very good compared to the state-of-the-art in sentiment transfer, such an experiment could show off the versatility of CoCon compared to PPLM.\\u201d\\n\\nWe thank the reviewer for the constructive comment. We have conducted additional experiments to study the suggested example in a similar spirit. In Table 6 where we compared CoCon with CoCon+ (described in response to (g)) where an additional content input is used, the influence from the original topic/sentiment content input is reduced as shown by CoCon+\\u2019s lower accuracy %. However, CoCon+\\u2019s topic/sentiment transfer is still present as shown by CoCon+\\u2019s higher accuracy versus the un-conditioned GPT-2 baseline, also in Table 6. Moreover, we observe that CoCon+\\u2019s texts have higher content similarity to the additional GPT-2 passage content input, showing the versatility of CoCon to condition on multiple generic content inputs compared to PPLM.\\n\\ni) \\u201cMinor suggestions\\u201d\\n\\nWe thank the reviewer for pointing out this typo and have corrected it accordingly.\\n\\n[1] Unsupervised text style transfer using language models as discriminators. NeurIPS 18\"}",
"{\"title\": \"Response to Review 1 (Part 1)\", \"comment\": \"We thank the reviewer for the positive and helpful comments. Please refer to the following for our response (in 2 parts):\\n\\na) \\u201cThe experimental section could be more thorough. For example, several aspects of the model are only evaluated qualitatively, and I don't find the examples very convincing. Moreover, some of the results are difficult to interprete or non-conclusive. The paper could benefit from a human evaluation.\\u201d\\n\\nWe thank the reviewer for the helpful feedback. We are happy to share that we have gained approval to conduct the human evaluation (after the ICLR submission deadline) and have since added human evaluation results (Table 5 and 6) to better evaluate CoCon\\u2019s effectiveness in controlling high-level attributes such as sentiments and topics as well as fluency perceived by humans. The human evaluation corroborates the results from automatic evaluation where CoCon displays better control over topic- and sentiment-relevant generations than other controlled generation baselines, albeit with a slight tradeoff in fluency.\\n\\nb) \\u201cIn the CoCon Setup you report to split \\u2026 somewhere between the 8th and 12th BPE position. Why is this sufficient? Wouldn't we expect the model to perform poorly on prefixes that are not between 8 and 12 BPE tokens long?\\u201d\\n\\nWe thank the reviewer for the discussion. We selected a split point somewhere between the 8th and 12th BPE positions to strike a balance between learning convergence and generalization. On one hand, it would be easier for CoCon to learn to reconstruct the generation if the content input phrase is very short; however, CoCon might not generalize well to longer content input during inference. On the other hand, using a long content input might make it challenging for CoCon to reconstruct the long generation faithfully at the start of the training, potentially causing an issue in training convergence. 
Indeed, we were surprised that CoCon is able to generalize well to both very short (one word for topic/sentiment control) and longer content input (GPT-2 passages in the new CoCon+ results) during inference, showing comparable human and automatic fluency scores in our experiments.\\n\\nc) \\u201cTable 2 suggests that CoCon without the adversarial loss achieves the best performance, drastically improving on content similarity while retaining comparable text quality and diversity. This makes me wonder why the adversarial term was introduced in the first place, and why it is apparently used in the other two experiments.\\u201d\\n\\nWe included an adversarial loss in CoCon as it has been shown to improve fluency for text generation in prior work [1]. Since we have not been able to conduct a human evaluation before the initial submission due to approval issues, we included the adversarial loss in the other two experiments with speculation that it may benefit human-perceived fluency since the perplexity scores are close while automatic and human evaluations of fluency sometimes contradict each other. From our newly added human evaluation (Table 8 in the appendix of revision), we observe that humans do perceive CoCon without adversarial loss as more fluent, corroborating the findings from their perplexity score. We speculate that the addition of another adversarial loss to the existing set of other training objectives has a slightly counterproductive effect by making it more challenging for the CoCon model to converge in its training. 
We have included this discussion in the revised manuscript in the \\u201cResults\\u201d section of 4.1: \\u201cIn our human evaluation (Table~8 of Appendix), we observe that humans also perceive CoCon without $\\\\mathcal{L_{\\\\text{adv}}}$ as more fluent, indicating that the addition of $\\\\mathcal{L}_{\\\\text{adv}}$ may have made it more challenging for the CoCon model to converge in its training.\\u201d\\n\\nd) \\u201cWhy is the perplexity of CoCon (and PPLM) consistently lower than the perplexity of the baseline LM GPT-2? Shouldn't we expect a trade-off between controllability and text quality? In the PPLM paper, the perplexity of the baseline is consistently (slightly) lower than that of PPLM.\\u201d\\n\\nWe thank the reviewer for bringing this up for discussion. We speculate that the perplexity of the baseline LM GPT-2 is lower as it is evaluated on the GPT model which has different model architecture and hence different bias compared to the GPT-2 model for generated tokens. In our newly added human evaluations (Table 5 of the revised manuscript), we observe that the baseline LM GPT-2 indeed has the highest fluency score across the topic/sentiment controlled generation, aligned with the trade-off between controllability and text quality we would expect.\"}",
"{\"title\": \"Response to Review 2\", \"comment\": \"We thank the reviewer for the positive comments and feedback. Please refer to the following for our response:\\n \\n\\na) \\u201cwhat does \\\"competitively\\\" mean here?\\u201d\\n\\nWe thank the reviewer for pointing this out for clarification. We initially used \\u201ccompetitively\\u201d to mean that CoCon can outperform the baselines in most of our initial experiments. We have since changed the phrasing of the core contribution to the following for more clarity: \\n\\n\\u201cThrough ablation studies and comparisons with strong baselines like PPLM and CTRL, we investigate how CoCon effectively influences the content of generation and can competitively control high-level text attributes such as topic and sentiment.\\u201d\\n\\n\\nb) \\u201cPage 5. cycle reconstruction loss. It would be helpful to give an example, otherwise it's a bit hard to see how cycle recon could have helped.\\u201d\\n\\nWe thank the reviewer for the constructive comment. We have since added Figure 2 with an example to improve the understanding of cycle reconstruction loss and better contrast it with self reconstruction.\\n\\nc) \\u201cOverall speaking, the choice of content input for all examples are weird. Why do we use partial phrases without a clear meaning or subject as the content hint?\\u201d\\n\\nWe used partial phrases for the experiments in \\u201cSection 4.1: Content Similarity\\u201d to study how CoCon can condition on generic content input in a large-scale manner. For topic and sentiment control (Section 4.2 and 4.3), the content inputs are control code words and sentiment markers used in previous methods respectively.\\n\\nIn our newly added experiments and results, we use GPT-2 output texts (instead of partial phrases) as the second content input on top of the topic/sentiment content input to better study how CoCon can generate text with content similarity to unseen content input, marked as CoCon+ in the revised manuscript. 
Table 6 and Table 7 show CoCon+\\u2019s content similarity with the conditioning GPT-2 text when evaluated with human and automatic metrics respectively. The higher content similarity to the additional GPT-2 passage content input shows that CoCon+ can flexibly generate text that is more similar to generic content inputs (GPT-2 passage) than other controlled text generation methods which share similar prompt text and target attributes.\\n\\n\\nd) \\u201cshould it be \\\"at the content level\\\"?\\u201d\\n\\nWe thank the reviewer for pointing this out. Indeed, that would be better. We have edited accordingly.\\n\\ne) \\\"unlikely co-occurs\\\" -> \\\"unlikely to co-occur\\\"\\n\\nWe thank the reviewer for the suggestion and have edited it accordingly.\"}",
"{\"title\": \"Response to Review 4 (Part 2)\", \"comment\": \"f) \\u201cThe self-reconstruction loss, by itself, appears to be problematic. Indeed, a model trained only on this loss might just learn to copy the conditioning text, thus destroying fluency and generalization. This should be explicitly discussed (instead of leaving the point more or less implicit in the second paragraph of section 4.4\\u201d\\n\\nWe thank the reviewer for the helpful suggestion. We have since edited and included the following sentences (in Section \\u201cSelf Reconstruction Loss\\u201d in Section 3.1) to better discuss this issue and how it has been addressed via CoCon\\u2019s $\\\\mathbf{c}$-mask in self reconstruction training to avoid learning to copy text directly: \\u201cTo avoid trivializing the prediction of the next token $x_{i+1}$ during training, we apply a self-token $\\\\mathbf{c}$-mask at CoCon's attention layer such that $\\\\mathbf{h}'_{i}$ does not attend to the token $x_{i+1}$ in $\\\\mathbf{c}$ it is trying to predict.\\u201d \\n\\nWe also make it more explicit, in the \\u201cCycle Reconstruction Loss\\u201d paragraph of Section 3.1, that cycle reconstruction loss is proposed for CoCon to generalize when $\\\\mathbf{c}$ is different from the generation label: \\u201cThe self reconstruction loss relies on CoCon content input ($\\\\mathbf{c}$) and initial prompt text ($\\\\mathbf{p}$) originating from one single text sample. To encourage generalization on cases where $\\\\mathbf{c}$ and $\\\\mathbf{p}$ are from divergent text sources, we employ a cycle reconstruction training that utilizes two different training samples\\u201d \\n\\ng) \\u201cIn particular, you should give more intuition/motivation for the Cycle Reconstruction Loss, which I did not really understand.\\u201d\\n\\nWe thank the reviewer for the constructive comment. 
We have since added Figure 2 with an example to improve the understanding of cycle reconstruction loss and better contrast it with self reconstruction.\\n\\n\\nh) \\u201cThe results are difficult to interpret, in particular due to the not very clearly formalized control objective (do you want the generated text to contain literal parts of the conditioning text (apparently not), or to have some semantic similarity with the conditioning text (apparently yes, but you do not explicitly mention or define semantic similarity)? It is difficult for the reader to really assess the quality of the results. Here, a human evaluation with a clear evaluation protocol would really be useful.\\u201d\\n\\nWe thank the reviewer for the helpful suggestion. We have since added human evaluation on the semantic similarity with the conditioning text. Indicated as CoCon+ in the revised manuscript, on top of the (first) target topic/sentiment content input, we also condition CoCon on GPT-2 output text as the second content input. These GPT-2 output texts are generated from the same prompt text as CoCon and the other baselines. Table 6 and Table 7 show CoCon+'s content similarity with the conditioning GPT-2 text when evaluated with human and automatic metrics respectively. The higher content similarity to the additional GPT-2 passage content input shows that CoCon+ can flexibly generate text that is more similar to generic content inputs (GPT-2 passage) than other controlled text generation methods which share similar prompt text and target attributes.\\n\\n\\n[1] Ankur Bapna, Naveen Arivazhagan, and Orhan Firat. Simple, scalable adaptation for neural machine translation. EMNLP-IJCNLP 2019\"}",
"{\"title\": \"Response to Review 4 (Part 1)\", \"comment\": \"We thank the reviewer for the constructive and detailed comments. Please refer to the following for our response (in 2 parts):\\n\\na) \\u201cThe main idea is actually pretty simple but the reader has to wait until the end of page 4 (Self Reconstruction Loss) to be able to understand it\\u201d\\n\\nWe thank the reviewer for the helpful feedback. We have edited the following sentence in the introduction to better introduce CoCon\\u2019s self reconstruction loss earlier on:\", \"original\": \"\\u201cTo avoid trivializing the prediction of the next token $x_{i+1}$, we apply a self-token $\\\\mathbf{c}$-mask at CoCon's attention layer such that $\\\\mathbf{h_{i}}'$ does not attend to values computed from $\\\\mathbf{h_{i+1}}$.\\u201d\\n-->\", \"revised\": \"\\u201cTo avoid trivializing the prediction of the next token $x_{i+1}$ during training, we apply a self-token $\\\\mathbf{c}$-mask at CoCon's attention layer such that $\\\\mathbf{h_{i}}'$ does not attend to the token $x_{i+1}$ in $\\\\mathbf{c}$ it is trying to predict.\\u201d\\n\\nc) \\u201cSome parts of the formal description are quite difficult to follow, for instance, the section on \\\"Cycle Reconstruction Loss\\\".\\u201d\\nWe thank the reviewer for the constructive comment. We have since added Figure 2 with an example to improve the understanding of cycle reconstruction loss and better contrast it with self reconstruction.\\n\\n\\n\\nd) \\u201cthe central objective of a text \\\"imbued\\\" (i.e. \\\"influenced\\\") by a conditioning text is left pretty informal.\\u201d\\n\\nWe thank the reviewer for pointing this out. 
We have replaced the word \\u201cimbued\\u201d with \\u201cconditioned on\\u201d, which is more often used in the literature.\\n\\ne) \\u201cAdapter Layers are a technique for adapting pretrained models which does not require retraining the entire model; they are therefore similar in spirit to the CoCon block, and should be cited\\u201d\\n\\nWe thank the reviewer for bringing this relevant work up for discussion. While adapter layers [1] have been previously proposed to also save on model size and training resources for multilingual translation, their training differs from CoCon\\u2019s self-supervised learning in that they rely on supervised training for a different task of machine translation, using annotated sentence pairs of different languages. CoCon\\u2019s core contribution is the use of self-supervised learning objectives such as self and cycle reconstruction to facilitate its training for conditioned text generation. We have added the following sentence in the \\u201cRelated Work\\u201d section to cite the work and discuss its difference from CoCon:\\n\\n\\u201cSmall adapter layers [1] have been previously proposed for multilingual translation to also save on model size and training resources but differ from CoCon's self-supervised training as they rely on annotated sentence pairs of different languages for supervised training.\\u201d\"}",
"{\"title\": \"Response to Review 3 (Part 2)\", \"comment\": \"f) \\u201cThere are missing details on how the textual contexts are selected during inference time.\\u201d\\n\\nWe thank the reviewer for bringing this up for clarification. The CoCon content inputs 'is perfect' and 'is horrible' are positive and negative sentiment attribute markers [3]. Sentiment attribute markers are essentially n-grams that appear in high frequency in text samples annotated with a particular attribute such as positive/negative sentiment. While we use one sentiment marker each for evaluation in the main paper, we also included generation from other positive/negative sentiment markers in Table 12 for more examples. The topic content inputs 'computers', 'politician', 'religion' and 'scientist' mirror CTRL\\u2019s control codes [4]. We have added these details in the revision\\u2019s \\u201cSetup\\u201d in Section 4.3: \\n\\n\\u201cSentiment attribute markers [3] 'is perfect' and 'is horrible' are used as content inputs to generate CoCon outputs for the Positive and Negative sentiment respectively. Sentiment attribute markers are n-grams that appear in high frequency in text samples annotated with a particular attribute such as positive/negative sentiment.\\u201d\\n\\n\\n\\ng) \\u201cOne advantage of using textual control tokens is handling unseen \\\"content inputs\\\" at test time. This should have been evaluated to show the superiority of this solution.\\u201d\\n\\nWe thank the reviewer for the helpful suggestion. We have since added more experiments to show the superiority of CoCon in controlling the text generation to unseen \\u201ccontent inputs\\u201d by using GPT-2 output text as the content input on top of topic/sentiment content inputs to show the flexibility of CoCon to control both high-level attributes and content of the generation. These dual content input generations from CoCon are labeled as CoCon+ in our revised manuscript. 
Table 6 and Table 7 show CoCon+'s effectiveness in controlling the text\\u2019s content versus other baselines when evaluated with human and automatic metrics respectively. The higher content similarity to the additional GPT-2 passage content input shows that CoCon+ can generate text that is more similar to unseen content inputs (GPT-2 passage) than other controlled text generation methods even though these methods share similar prompt text and target attributes.\\n\\nh) \\u201cIs that a typo in figure 1? The cocon layer output representation should be ..\\u201d\\n\\nWe thank the reviewer for pointing this out for clarification. That is not a typo: $\\\\tilde{\\\\mathbf{o_t}}$ from Equation (7) represents the logit of the first CoCon generated token where the prompt text\\u2019s original hidden states ($\\\\mathbf{h_{:t-2}}$) are concatenated to the CoCon state $\\\\mathbf{h_{t-1}}'$. The original hidden states of the prompt text are used as the prompt text is not an output of CoCon.\\n\\n\\n\\nWe would like to thank the reviewer again for the very helpful and thoughtful feedback for us to improve the paper.\\n\\n[1] Zero-Shot Question Generation from Knowledge Graphs for Unseen Predicates and Entity Types, NAACL2018\\n[2] Unsupervised text style transfer using language models as discriminators. NeurIPS 18\\n[3] Delete, retrieve, generate: A simple approach to sentiment and style transfer. NAACL 18\\n[4] Ctrl: A conditional transformer language model for controllable generation.\"}",
"{\"title\": \"Response to Review 3 (Part 1)\", \"comment\": \"We thank the reviewer for the thoughtful and helpful comments. Please refer to the following for our response (in 2 parts):\\n\\na) \\u201cConditioning NLG models on textual contexts to influence the generated text is a straightforward solution and makes sense in terms of flexibility, and in fact, has been used before [1] to enhance faithfulness in QG tasks.\\u201d:\\n\\nWhile conditioning neural language generation on textual context has been used before, CoCon is the first to learn zero-shot conditioned language generation for large language models (LMs) in a self-supervised manner. [1] enhances faithfulness in question generation by attending to textual context such as predicates, subject types or object types rather than the content input used here in CoCon. We have added this discussion in Section 2 \\u201cRelated Work\\u201d of the revised manuscript. Given how remarkable current large transformer LMs are in text generation, we believe it is timely that CoCon can extend the LMs\\u2019 potential to even more applications through better control of its generation.\\n\\n\\nb) \\u201cthis formulation is not suitable for other types of control where textual contexts are hard to formulate, such as \\\"removing\\\" toxicity, controlling the length.\\u201d\\n\\nIndeed. However, CoCon\\u2019s main aim is to exercise fine-grained control over LM\\u2019s generations with the flexible medium of content input. This makes CoCon a complementary and orthogonal tool to other types of controlled generation methods as shown in \\u2018Complementary Text Control\\u2019 of Section 4.4.\\n\\nc) \\u201cThere might be an issue with the Adversarial loss, being non-differentiable (see Q1 below)\\u201d\\n\\nWe thank the reviewer for pointing this out for clarification. 
Similar to previous work on adversarial learning for text generation [2], through continuous approximation of discrete sampling of $y$, CoCon and $f_{\\\\text{disc}}$ can be trained with backpropagation in an end-to-end manner. We have added this detail in our revision under \\u201cAdversarial Loss\\u201d in Section 3.1:\\n\\u201cThrough continuous approximation of discrete sampling of $y$, CoCon and $f_{\\\\text{disc}}$ can be trained with backpropagation in an end-to-end manner.\\u201d\\n\\n\\nd) \\u201cEvaluation could have been more thorough\\u201d\\nWe thank the reviewer for the constructive suggestion. We have since added more results to show CoCon\\u2019s superiority in controlling the generation with unseen \\\"content inputs\\\" as shown in Table 6 and Table 7. Please refer to the detailed discussion in the response (g) below.\\n\\nWe have also added human evaluation (Table 5 and 6) to better evaluate CoCon\\u2019s effectiveness in controlling high-level attributes such as sentiments and topics as well as fluency perceived by humans. The human evaluation corroborates the results from automatic evaluation where CoCon displays better control over topic- and sentiment-relevant generations than other controlled generation baselines, albeit with a slight tradeoff in fluency.\\n\\n\\ne) \\u201cEnhancing topic relevance of PPLM and CTRL could be achieved by reducing the temperature during decoding, at the expense of perplexity as well \\u2026 While this has been an issue in previous work as well, it would have been better to fix this issue and provide a better evaluation, a good method to evaluate this could have been plotting perplexity vs control satisfaction rate under a temperature sweep.\\u201d\\n\\nWe thank the reviewer for the suggestion. We have added more results that could better address this point. 
When adding an additional (second) content input to CoCon on top of the target topic/sentiment (first) content input (named CoCon+ in the revised manuscript), we observe a tradeoff in the topic/sentiment conditioned generation (CoCon vs CoCon+ in Table 6). While CoCon+ generation shows higher content similarity to the additional (second) content input than CoCon, its generations display lower topic/sentiment relevance. We have edited the third core contribution in the Introduction to better reflect this point without claiming that CoCon outperforms PPLM and CTRL in topic/sentiment control: \\u201cThrough ablation studies and comparisons with strong baselines like PPLM and CTRL, we investigate how CoCon controls high-level attributes such as topic and sentiment while generating texts that have high content similarity to conditioning text.\\u201d\\n\\nWe would like to also point out that while baseline methods like PPLM and CTRL can also control high-level attributes like topics and sentiment, CoCon offers additional fine-grained control and flexibility over text generations by conditioning on unseen \\\"content inputs\\\", as discussed further in response (g) below. Moreover, CoCon can be trained in a self-supervised manner, relieving the burden of data annotation involved in those previous methods.\"}",
"{\"title\": \"An architectural modification allowing integration of textual contexts, converting the problem of controlled text generation into conditional text generation.\", \"review\": \"This paper tackles the problem of controlled text generation by converting it into a conditional text generation similar to (Keskar et al.19). It proposes an architectural modification to the transformer LM used in GPT-2. Specifically, a CoCon layer is added as a separate transformer block in the middle, allowing self-attention to be performed between the textual context representations LM_\\u03b1(c) and the generations LM_\\u03b1(x_{:t-1}); this is performed by concatenating the key and value matrices with the keys and values of the encoded textual context. The authors provide 4 different losses to train this additional layer.\", \"pros\": \"- The proposed method has an advantage over (Keskar et al.19) by \\n1) avoiding rigid control tokens and replacing them by textual context. \\n2) avoiding retraining the whole LM architecture and replacing this by retraining a single transformer block instead \\n3) allowing several control contexts at once (this is an interesting aspect of the proposed solution)\", \"cons\": [\"The proposed solution to controlled NLG is simple yet not inspiring nor revolutionary; simplicity could of course have been an advantage here, if it tackled the problem by providing a concrete method to control NLG models. However, this is not the case here (see next)\", \"Conditioning NLG models on textual contexts to influence the generated text is a straightforward solution and makes sense in terms of flexibility, and in fact, has been used before [1] to enhance faithfulness in QG tasks. 
On the other hand, conditional text generation as a solution to controlled NLG might be effective for influencing topic or sentiment; however, this formulation is not suitable for other types of control where textual contexts are hard to formulate, such as \\\"removing\\\" toxicity or controlling the length.\", \"There might be an issue with the Adversarial loss, being non-differentiable (see Q1 below)\", \"Evaluation could have been more thorough; specifically, when the proposed method has superior topic and sentiment relevance in table 3 and table 4, this comes at the cost of perplexity. Enhancing topic relevance of PPLM and CTRL could be achieved by reducing the temperature during decoding, at the expense of perplexity as well. This is similar to the Quality/Diversity tradeoff shown in [2]. While this has been an issue in previous work as well, it would have been better to fix this issue and provide a better evaluation; a good method to evaluate this could have been plotting perplexity vs control satisfaction rate under a temperature sweep.\", \"There are missing details on how the textual contexts are selected during inference time. In most of the cases, they're handcrafted topic names or short sentences (\\\"is perfect\\\"). This makes the proposed solution very similar to control tokens by Keskar et al. 19. One advantage of using textual control tokens is handling unseen \\\"content inputs\\\" at test time. This should have been evaluated to show the superiority of this solution.\"], \"questions\": \"\", \"q1\": \"This is a critical one. If I got this part correctly, the adversarial loss eq 18 requires sampling y from the LM, which is non-differentiable. If that is the case, did you follow any necessary steps (e.g. 
RL or continuous approx) to overcome this non-differentiability?\", \"refs\": \"1- Zero-Shot Question Generation from Knowledge Graphs for Unseen Predicates and Entity Types, NAACL2018\\n\\n2- LANGUAGE GANS FALLING SHORT ICLR2020\", \"minor\": [\"Is that a typo in figure 1? The cocon layer output representations h_1, h_{t-2} should be h'_1, h'_{t-2}\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review: CoCon: A Self-Supervised Approach for Controlled Text Generation\", \"review\": \"SUMMARY:\\n\\nThe paper proposes a self-supervised technique for controlling the productions of a Transformer-based pretrained generator. The technique consists in augmenting the architecture of the pretrained model with a special \\\"content-conditioner\\\" (CoCon) block which is able to exploit a contextual condition. \\nAt training time, this contextual condition is obtained, in a self-supervised way, by removing a textual portion from a training text and using this portion as the contextual condition, and then the parameters of the CoCon component learn how to approximately recover the missing portion based on this context (the portion itself) and on the prefix text preceding the removal.\\nAt test time, a textual condition is provided as context, and the trained model produces a text \\\"imbued\\\" (authors' terminology = influenced) by this condition.\", \"positives\": \"While self-supervised learning has been employed for certain text generation tasks, such as summarization, I am not aware of previous works directly concerned with self-supervision for controlled open-ended text generation. This appears to be a very worthwhile direction to pursue.\", \"issues_and_questions\": \"*Clarity*. The main idea is actually pretty simple but the reader has to wait until the end of page 4 (Self Reconstruction Loss) to be able to understand it (true: it was exposed in the intro, but in a way difficult to understand on a first reading), and is a bit drowned in a dense mass of mathematical notations that do not help. Some parts of the formal description are quite difficult to follow, for instance the section on \\\"Cycle Reconstruction Loss\\\". Also, in Fig. 1, the reader does not immediately see that (*IF* I understand correctly) the hidden states $h_{t-1},...,h_{l-1}$ are masked, which does not help in understanding an already dense formal description. 
Perhaps most serious: the central objective of a text \\\"imbued\\\" (i.e. \\\"influenced\\\") by a conditioning text is left pretty informal.\\n\\n*Related work and Alternatives to the CoCon block*. Adapter Layers (https://www.aclweb.org/anthology/D19-1165/) are a technique for adapting pretrained models which does not require retraining the entire model; they are therefore similar in spirit to the CoCon block, and *should* be cited, with differences highlighted. (More minor: it might (?) also be worthwhile to mention a different option: using an encoder-decoder model (similar to NMT) where the conditioning context would just be the \\\"source\\\" and the generated text the \\\"target\\\", directly providing an attention-driven mechanism --- however the issue of retraining the whole model would then need to be addressed)).\\n\\n*Complexity of the overall model, intuition about the different losses, hyperparameters* The self-reconstruction loss, by itself, appears to be problematic. Indeed, a model trained only on this loss might just learn to *copy* the conditioning text, thus destroying fluency and generalization. This should be explicitly discussed (instead of leaving the point more or less implicit in the second paragraph of section 4.4: \\\"... limit where the text appears incomprehensible\\\"). Therefore the need to interpolate this loss with other losses (section 3.1). While you provide some ablation experiments, you do not much discuss the importance of these different losses. In particular, you should give more intuition/motivation for the Cycle Reconstruction Loss, which I did not really understand. 
The overall model involves quite a few hyperparameters ($\\\\lambda$'s in equation (20), $\\\\tau_{content}$).\\n\\n*Results* The results are difficult to interpret, in particular due to the not very clearly formalized control objective (do you want the generated text to contain literal parts of the conditioning text (apparently not), or to have some semantic similarity with the conditioning text (apparently yes, but you do not explicitly mention or define semantic similarity)? It is difficult for the reader to really assess the quality of the results. Here, a human evaluation with a clear evaluation protocol would really be useful.\\n\\n\\nOverall, an interesting and important objective: self-supervision of controlled text generation, with some nice ideas. But serious flaws in presentation and experimental validation.\\n\\n------- \\n**Written after rebuttal:**\\nThank you for the substantial improvements to the paper in terms of clarity (in particular Figure 2 is helpful) and additional experiments/human evaluations. Despite some underlying questions (from me and other reviewers) that remain, I have updated my score and am now leaning towards acceptance.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper proposed a novel way of controlling language model output.\", \"review\": \"The paper proposed a way to control the content output of a DNN-based language model (GPT-2 in the experiment, but not limited to it). It places a layer (CoCon) that can take an arbitrary phrase as the hint after generating the embedding but before generating the text. Experiments showed that the control is effective at directing the generated text. Examples confirmed that too.\", \"quality\": \"The design of the CoCon layer is intuitive. The authors clearly explained the rationale behind the design of the layer. Experiments are based on strong baselines (GPT-2, PPLM and CTRL), and show clear advantage of the model.\", \"clarity\": \"The writing is clear and easy to follow. I have some minor comments but believe they are easily fixable.\", \"originality\": \"CoCon has a clear but incremental difference from PPLM and CTRL.\", \"significance\": \"Controlling the generation of LM is not a novel task. This is an improvement on an existing problem with several solutions. Moderate originality.\", \"my_questions_and_suggestions\": \"1) Page 2, core contribution, item 3: what does \\\"competitively\\\" mean here?\\n\\n2) Page 2, Related Work, first paragraph. \\\"our approach aims to control the generation at a content level, beyond high-level\\ntext attributes.\\\" should it be \\\"at the content level\\\"?\\n\\n3) Page 5. cycle reconstruction loss. It would be helpful to give an example, otherwise it's a bit hard to see how cycle recon could have helped.\", \"same_line\": \"\\\"unlikely co-occurs\\\" -> \\\"unlikely to co-occur\\\" ?\\n\\n4) Page 6 , 2nd paragraph \\\"self-supervised and requires no manually labeled data fully\\\" is duplicated, can be removed.\\n\\n5) Overall speaking, the choice of content input for all examples is weird. 
Why do we use partial phrases without a clear meaning or subject as the content hint?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Useful method with promising results, but evaluation could be better\", \"review\": [\"**Thanks to the authors for the response. The addition of a human study and CoCon+ has made the paper substantially stronger, as it resolves most of my concerns. The authors provided plausible explanations for the remaining questions. The paper should now be considered a clear accept.**\", \"The paper proposes a method for controlled text generation with pretrained (unconditional) language models. The method trains a relatively small network (CoCon) that is injected between any two successive layers of a pretrained Transformer language model. Given a prefix, CoCon is trained to output an input to the next layer such that the remainder of the text is generated at the output layer. CoCon is a function not only of the prefix but also of some desired 'content' sequence, which allows controlling the content of the output text at inference time. Several auxiliary loss terms are employed to improve generalization. The model is evaluated on its ability to generate output with desired content, to generate text of a desired topic, and to generate text of specific sentiment.\", \"Compared to previously proposed Plug and Play Language Models, the novelty of the CoCon method lies in its ability to condition on entire input sequences (instead of only bag-of-words), and the fact that it does not require style labels, which are both important properties. The paper is well written, the method is intuitive and all components are well motivated. The experimental section could be more thorough. For example, several aspects of the model are only evaluated qualitatively, and I don't find the examples very convincing. Moreover, some of the results are difficult to interpret or inconclusive. 
The paper could benefit from a human evaluation.\", \"In the papers current state I would already slightly lean towards acceptance because the method itself will be useful to the community, and some of the results are promising. I am willing to strengthen my recommendation if my questions below are answered positively.\", \"In the _CoCon Setup_ you report to split $x$ into $x^a$ and $x^b$ somewhere between the 8th and 12th BPE position. Why is this sufficient? Wouldn't we expect the model to perform poorly on prefixes that are not between 8 and 12 BPE tokens long?\", \"Table 2 suggests that CoCon without the adversarial loss achieves the best performance, drastically improving on content similarity while retaining comparable text quality and diversity. This makes me wonder why the adversarial term was introduced in the first place, and why it is apparently used in the other two experiments.\", \"Why is the perplexity of CoCon (and PPLM) consistently lower than the perplexity of the baseline LM GPT-2? Shouldn't we expect a trade-off between controllability and text quality? In the PPLM paper, the perplexity of the baseline is consistently (slightly) lower than that of PPLM.\", \"Why does training on (human-written) Webtext instead of (machine-written) outputs of GPT-2 _decrease_ the text quality? Wouldn't we expect the opposite?\", \"The above three questions lead me to believe that using perplexity on GPT-1 might not be a suitable metric to judge text quality in these scenarios. Could you please provide more arguments why you believe a human study is not needed here?\"], \"suggestions\": \"The model extensions listed under 4.4 are very interesting. The paper would be even stronger if you had quantitative experiments for these, as the examples that are given are not very convincing. 
For example, couldn't you apply your model to (gradual) sentiment transfer by conditioning CoCon on the input text as well as the target sentiment (\\\"is perfect\\\"), weighted by $\\\\tau_{content}$? Even if the results were not very good compared to the state-of-the-art in sentiment transfer, such an experiment could show off the versatility of CoCon compared to PPLM. Moreover, if PPLM and CoCon complement each other as you claim, why not add another row \\\"CoCon + PPLM\\\" to Table 3, 4, and 5?\", \"minor_suggestions\": [\"In the results section of 4.1, you say that L_adv marginally reduces CoCon's perplexity, but the table shows that _removing_ L_adv reduces it.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
0p-aRvcVs-U | $\alpha$VIL: Learning to Leverage Auxiliary Tasks for Multitask Learning | [
"Rafael Kourdis",
"Gabriel Gordon-Hall",
"Philip John Gorinski"
] | Multitask Learning is a Machine Learning paradigm that aims to train a range of (usually related) tasks with the help of a shared model. While the goal is often to improve the joint performance of all training tasks, another approach is to focus on the performance of a specific target task, while treating the remaining ones as auxiliary data from which to possibly leverage positive transfer towards the target during training. In such settings, it becomes important to estimate the positive or negative influence auxiliary tasks will have on the target. While many ways have been proposed to estimate task weights before or during training they typically rely on heuristics or extensive search of the weighting space. We propose a novel method called $\alpha$-Variable Importance Learning ($\alpha$VIL) that is able to adjust task weights dynamically during model training, by making direct use of task-specific updates of the underlying model's parameters between training epochs. Experiments indicate that $\alpha$VIL is able to outperform other Multitask Learning approaches in a variety of settings. To our knowledge, this is the first attempt at making direct use of model updates for task weight estimation. | [
"multitask learning",
"meta-optimization",
"deep learning"
] | Reject | https://openreview.net/pdf?id=0p-aRvcVs-U | https://openreview.net/forum?id=0p-aRvcVs-U | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"HCKOxAUQdlb",
"oFi-R_p3Tz1",
"mjBGRH9S9Dy",
"chg0_tXsrUA",
"ooUtIp6cKAf",
"y6RkFNGKTm",
"he0-t4t4_YF",
"33OOEexVtIv"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513320,
1606226828218,
1606226789131,
1606226711512,
1606226650784,
1603961058665,
1603937932268,
1603823484555
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3543/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3543/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3543/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3543/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3543/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3543/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3543/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes \\\\alphaVIL, a method for weighting the task-specific losses in a multi-task setting in order to optimize the performance on a particular target task. The idea is to first collect gradient updates for the model based on all the separate tasks, and then re-weight those updates in order to optimize the loss on a held-out development set for the target task. In practice, this meta-optimization is performed with gradient descent. Experiments on multi-MNIST and several tasks that are part of GLUE and SuperGLUE show that \\\\alphaVIL is close in performance to a baseline multitask method and discriminative importance weighting.\", \"strengths\": [\"The idea is intuitively appealing. Directly reweighting tasks as a meta-optimization step is straightforward and appears to not be proposed previously in the literature.\", \"The paper is clear in its presentation.\"], \"weaknesses\": [\"The reviewers agree that the main weakness is that the experimental results do not show that \\\\alphaVIL offers any substantial benefits over existing methods. On the multi-MNIST task, while \\\\alphaVIL tends to have the highest mean performance, the difference is small (less than a standard deviation). On the GLUE/SuperGLUE tasks, it outperforms other methods on only 1 out of 10 experiments. There are also no confidence intervals/standard deviations provided to assess the significance of the results.\"]}",
"{\"title\": \"Review-specific Response\", \"comment\": \"We would like to thank you, your comments are very helpful in our efforts to improve our work.\\nPlease refer to our general comment, where we believe we answered the points raised in \\\"Cons\\\" and \\\"Overall\\\".\", \"for_specific_questions\": \"The task weights are there to 'accumulate' in time the relative task weighting that has been calculated using the alpha optimization. The intuition is that if a task's change in the model has always been applied with a weight < 1, we probably want to collect future changes by this task with a scale in its gradients that reflects this downweighting. However, it is correct that it's possible for a single metaparameter to decide at the end how to weight a task delta without the need for extra scaling during collection. We have not done any experiments to compare the two methods yet but it'd be a good addition for an updated version of the paper.\\n\\nWe chose MultiMNIST for two reasons. First, we knew that multitask learning for this dataset helps, as this had been established in prior work (in particular, the MultiMNIST paper itself). Second, as the tasks overlap in terms of their classification space, but concentrate on different parts of the image, we could reasonably expect the task weights to eventually go towards 1.0 and 0.0 for the main and auxiliary tasks respectively (see also Figure 3, which shows they actually do). This would act as a sort of sanity check. We should definitely have made this point clearer in the paper, and will do so in the updated version.\\n\\nEnsembles were chosen for the test set as GLUE and superGLUE only allow a very limited number of submissions to be tested, leaving us the choice between ensemble or single-best models per method. While we believe that average scores per method are more meaningful, we agree that we should also include ensemble results for the development set to be consistent with the test setting.\"}",
"{\"title\": \"Review-specific Response\", \"comment\": \"Thank you for your helpful comments.\\n\\nIn addition to our general response, we would like to answer your specific questions here.\\n\\n\\\"For example, in line 11 of Algorithm 1, I don\\u2019t understand the intuition arbitrarily subtracting 1 from all weights.\\\" -- We see how this is unclear and should be pointed out in the text. We are not subtracting 1 from all weights, but from the newly found alpha parameters. Alpha parameters tell us which way to adjust the weights, i.e., whether to increase or decrease a task's importance. Since alphas are initialised to 1 before being optimised to weigh the tasks (lines 8--10, Algorithm 1), an alpha-value >1 entails that the corresponding task importance weight should be greater, while an optimized alpha < 1 indicates that the task should be down-weighted. Accordingly, in line 11 of Algorithm 1, the term (\\\\alpha - 1) will be positive if alpha > 1 and the new weight according to w+(\\\\alpha-1) will be increased. Conversely, if alpha < 1, the overall term will be negative, and thus subtracted from the task weight, decreasing it.\\n\\nOn the novelty of our method, while dynamic task weighting is not new, we believe that determining task weights through direct metaoptimization is. While DIW aims to optimize the weights with a numerical estimate of the gradient, we see our work as a more general framework which is compatible with any optimization method (e.g. Adam). We see how this could seem similar to meta-learning algorithms like MAML or Reptile. However, the metalearning objective is different, i.e., instead of searching for a model initialization that can be used for rapid finetuning, we are looking for task weights during training, skipping this step. Other Meta-weight approaches like Shu et al. (2019) do adjust sample-specific weights for a single task during training by learning a complex weighting function. 
This differs from our approach as we are looking at data across different tasks rather than within the same task, and our weighting is a very simple interpolation step of different task-specific updates.\\n\\nAdditional notes/questions:\\n\\n1. In NLU, for standard multitask, due to the fact that task datasets are not balanced, we sample 25% of all data for each task and train with this to calculate a task delta.\\n\\n2. While the weights eventually go to 1.0 and 0.0 respectively, we observed in Figure 3 that they do so gradually over the course of training to about epoch 25. We conjecture that initially, there is at least some benefit conveyed by the auxiliary task, which has implications for the final trained model.\\n\\n3. In part, MultiMNIST was meant to provide this sanity check, in combination with Figure 3. We should have made this more clear in the text. We tried more sanity checks, e.g., splitting the datasets into parts and looking to see if the algorithm will pick the same task's splits (that are guaranteed to positively transfer), but we were short of space in the paper. \\n\\n4. We can add standard deviations for NLU in the camera-ready version; however, for the submitted version they are collected over only 4 random seeds, so are less meaningful than for MNIST where they are based on a much larger set of 20 runs.\"}",
"{\"title\": \"Review-specific Response\", \"comment\": \"Thank you for your feedback and suggestions. We hope we have replied to your concerns in our general response.\\n\\nWe would just like to add that our baselines include standard MT learning, and a single-task oriented approach (each task trained in isolation) as pointed out, as well as Discriminative Importance Weighting. DIW is a very competitive and strong target-task oriented MT algorithm, and in its formulation close to aVIL, with the difference of aVIL tuning task-specific weights through additional optimization. The experiments show aVIL on average outperforms DIW on the tested domains, and in particular, does seem to suffer less from overfitting on the development set(s).\"}",
"{\"title\": \"General Response to Reviewers\", \"comment\": \"We would like to thank the reviewers for their genuinely helpful comments. We are very glad there is consensus that the paper is well written and easy to understand.\\n\\nWe will address issues raised by multiple reviewers here, and reviewer-specific questions in their own comments.\\n\\nThere was a general concern that results are relatively weak and only marginally improve over the compared methods, in particular Discriminative Importance Weighting. We agree that the numbers leave this impression especially in the tested NLP domain.\\nTo put the numbers in perspective, we would like to point out that improvements yielded by multitask learning on NLU tasks are often small. This phenomenon is also observed in other multitask settings, for example in the survey of [1] where MT leads to very mixed results.\\nLooking at SuperGLUE (of which we used a subset of tasks), the overall average scores of the RoBERTa_large model in single and multitask setup differ by only 1.1 points (due to computational constraints we used the smaller _base variant in this work). Furthermore, on CommitmentBank, CoPA, and RTE the accuracy differences yielded by multitask training are +0.4, +0.6, and -0.1. This goes to show that in general achieving a substantial accuracy improvement is tough with the given model on these tasks.\\n\\nOn the other hand, on the more 'artificial' task of MultiMNIST (less noise, larger, cleaner and more consistent data), aVIL consistently outperforms the compared methods wrt. 
mean performance, including the very strong DIW which is close in its formulation to aVIL, but relies on a more aggressive weight tuning approach.\\nOn the same note, we believe that one advantage of aVIL over DIW is that it is less prone to this overfitting on the development data, as we briefly point out in the last paragraph of Section 4.\\n\\nAnother common criticism is a lack of theoretical motivation/justification of the proposed algorithm. We have to concede that the work at present is lacking in this regard, and intend to remedy this for the camera-ready version.\\n\\nThe final common suggestion between reviews was the addition of more experiments and/or ablation studies to show the efficacy of our method. As we were hard-pressed to fit the algorithm along with the existing MultiMNIST and NLP experiments into the page limits, we had to cut out additional experiments and analysis. We will add this to the main text of a camera-ready version, space permitting, or add respective appendices.\\n\\n\\n[1] Vandenhende et al. (2020) \\\"Multi-Task Learning for Dense Prediction Tasks: A Survey\\\"\"}",
"{\"title\": \"The proposed methodology is intuitive and flexible, the experimental results are not convincing enough.\", \"review\": \"This paper proposes a novel multi-task learning method which adjusts task weights dynamically during training, by exploiting task-specific updates of the model parameters between training epochs. Specifically, the proposed model takes the differences between the model\\u2019s parameters before and after the singletask update, after which the mixing factors of the model updates are found based on the differences to minimize the loss on the target task\\u2019s development data. Empirical studies are performed on tasks of computer vision and natural language understanding.\\n\\nThe paper is well written and easy to follow, and the authors summarize the related work in a clear manner. The proposed methodology is intuitive and well-motivated; at the same time, it is flexible and can be generalized to other variations in terms of models and tasks.\", \"my_major_concern_about_the_paper_include_the_following\": \"1)\\tAlthough the proposed method is intuitive and straightforward, it would be necessary to provide some theoretical justification or a formal analysis of the proposed methodology.\\n\\n2)\\tConsidering the lack of a theoretical justification, the experimental results are not convincing enough to justify the proposed method. The baselines chosen include standard multi-task learning and one single task-oriented approach, which is somewhat limited. Even so, on both the computer vision and natural language understanding tasks, the proposed method doesn\\u2019t consistently outperform the baselines in most cases. The authors did provide sufficient analysis, nevertheless, it doesn\\u2019t justify the effectiveness of the algorithm. 
\\n\\nBased on the concerns above, the paper can be improved from both the theoretical and empirical perspectives.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Ad hoc multitask learning via task weighting with unconvincing experiments\", \"review\": \"Summary: This paper presents an algorithm for multitask learning that learns task weights via an EM-like approach that alternates between updating the model parameters (using task weights) and updating the task weights (using current model parameters, based on the target task development set).\", \"experiments\": \"They compare against single task training, standard multitask training (though this isn\\u2019t described very clearly, but roughly it is training jointly on tasks), and another method for learning task weights, Discriminative Importance Weighting (DIW).\\nThey present experiments on MultiMNIST, where the two tasks are to predict the digit in the top left and bottom right of two superimposed digits. The proposed algorithm has the best mean performance, but results are within a standard deviation of the baselines. They also present experiments on 5 NLU tasks (CommitmentBank, COPA, MRPC, RTE, WNLI) with the same baselines. The results on these tasks are mixed, with all multitask methods outperforming single task training (except on WNLI, which is a bit degenerate).\\n\\nOverall, this paper needs a bit more work. The proposed method is quite ad hoc, and with little justification, it\\u2019s not clear why we should be doing any of the things the algorithm proposes. For example, in line 11 of Algorithm 1, I don\\u2019t understand the intuition arbitrarily subtracting 1 from all weights. From a novelty perspective, I\\u2019m not convinced the proposed method is different enough from existing methods. Dynamic task weighting is not particularly new (e.g. the baseline method, DIW, they compare against), and their method starts to look a lot like meta-learning of task weights (like MAML [1], [2], or [3]). The results from the experiments are not convincing to me. 
On MNIST, the results between all methods are fairly close together, and on the NLU tasks, there\\u2019s no clear best algorithm.\\n\\nAdditional notes and questions\\n1. For the \\u201cstandard multitask baseline\\u201d, are the tasks balanced in size? Do you deterministically train on a batch from both or is it stochastic? This is mostly relevant for the NLU tasks, which have fairly different sizes.\\n2. On MNIST, given that the algorithm sets the weight of one task to 1.0 and the other to 0.0, why is this algorithm outperforming single-task training?\\n3. On a similar note, it'd be nice to see a sanity check experiment that the learned weights are sensible (e.g. one task has random labels) or an example of where the learned weights are binary.\\n4. I appreciate that the authors report min/max/mean/std of 20 runs on MNIST. It would be nice to see the standard deviations for the NLU tasks for consistency and given the fact that the standard deviations on the MNIST task were important in differentiating significant differences. Similarly, it would be nice to see how the task weights evolve.\\n\\nStyle notes\\n* Huggingface Transformers now has a citation\\n* Multitask Learning; Computer Vision; Natural Language Processing/Understanding: lowercase\\n* \\u201cSingletask\\u201d should probably be hyphenated or two words.\\n* \\u201c10.000\\u201d \\u2192 \\u201c10,000\\u201d\\n* Table 2 could really use headers over the two columns within each task.\\n\\n[1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \\\"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.\\\" ICML. 2017.\\n[2] Shu, Jun, et al. \\\"Meta-weight-net: Learning an explicit mapping for sample weighting.\\\" Advances in Neural Information Processing Systems. 2019.\\n[3] Wang, Xinyi, Yulia Tsvetkov, and Graham Neubig. 
\\\"Balancing training for multilingual neural machine translation.\\\" arXiv preprint 2004.06748 (2020).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Multi-task learning with gradient-based meta-optimization for learning task-specific weights\", \"review\": \"Summary:\\nThis paper proposes a new approach for multi-task learning that estimates the individual task weights through gradient-based meta-optimization on a weighted accumulation of task-specific model updates. Evaluations are performed in a multi-task learning setup on tasks related to computer vision (Multi-MNIST) and natural language understanding (tasks from GLUE and SuperGLUE).\", \"pros\": \"1) The paper is easy to follow. \\n\\n2) Empirical evaluation is performed on vision and NLU domains.\", \"cons\": \"1) I am not completely convinced by the proposed alpha-Variable Importance Learning algorithm. It is not very clear in the discussion how the alpha is different from task-specific weights. For example, in algorithm-1, if you replace deltas in line-10 with line-6, then there is no need to have separate alphas and task-specific weights, where line-9 can calculate the task-specific weights directly. \\n\\n2) In general, for a multi-task setup, I would expect to show the multi-task learning with multiple auxiliary tasks (that\\u2019s the main motivation of this paper as well). However, the choice of the experimental setup is not convincing, especially for the vision domain where there is only one auxiliary task. \\n\\n3) Both results in Table-1 and Table-2 suggest that the proposed algorithm is not superior to the baselines and previous approaches. The improvements are minor and sometimes lower, and I believe most of the results fall within the statistically insignificant range.\", \"overall\": \"I think the paper can be made stronger with a more thorough discussion on the algorithm and its properties. Further, the experimental results suggest that the proposed algorithm performs more or less similarly to previous methods. Hence, there is a lot of scope for further improvement and I would suggest rejecting this paper. 
I would also suggest that the authors perform more experiments and ablations.\", \"questions\": \"1) How is the alpha different from task-specific weights? Please discuss more on this. In algorithm-1, if you replace deltas in line-10 with line-6, then there is no need to have separate alphas and task-specific weights?\\n\\n\\n2) Please provide statistical significance scores for all the results. \\n\\n3) What's the reason behind choosing a multi-MNIST dataset with only one auxiliary task? Aren\\u2019t there other datasets in an MTL setup with more auxiliary tasks? \\n\\n4) Table-2 results for the development set are based on the average of multiple runs, but for test you reported the ensemble, so why don\\u2019t you report the ensemble for the development set as well?\\n\\n5) Can you also present some ablations/discussion on the learned importance of an auxiliary task (based on task-specific weight over the training trajectory) vs. any intuitive reason that makes sense of this importance of the auxiliary task for a given primary task? If there is no such correlation, that is also good to discuss.\", \"other_comments\": \"1) Please try to expand the introduction section. \\n\\n2) Please provide some more ablations.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
ghKbryXRRAB | Tracking the progress of Language Models by extracting their underlying Knowledge Graphs | [
"Carlos Aspillaga",
"Marcelo Mendoza",
"Alvaro Soto"
] | The state of the art of language models, previously dominated by pre-trained word embeddings, is now being pushed forward by large pre-trained contextual representations. This success has driven growing interest to understand what these models encode inside their inner workings. Despite this, understanding their semantic skills has been elusive, often leading to unsuccessful, non-conclusive, or contradictory results among different works. In this work, we define a probing classifier that we use to extract the underlying knowledge graph of nine of the currently most influential language models, including word embeddings, context encoders, and text generators. This probe is based on concept relatedness, grounded on WordNet. Our results show that this knowledge is present in all the models, but has several inaccuracies. Furthermore, we show that the different pre-training strategies and architectures lead to different model biases. We conduct a systematic evaluation to discover specific factors that explain why some concepts are challenging for the different families of models. We hope our insights will motivate the future development of models that capture concepts more precisely. | [
"Language Models",
"NLP",
"Knowledge Graphs",
"Probe tasks",
"Word2Vec",
"GloVe",
"ELMo",
"BERT",
"RoBERTa",
"XLNet",
"ALBERT",
"T5",
"GPT2"
] | Reject | https://openreview.net/pdf?id=ghKbryXRRAB | https://openreview.net/forum?id=ghKbryXRRAB | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"4-XV2PdBjLV",
"Qvv_EGHRyxv",
"1LgJChLE-v",
"qK5NgrcnZdh",
"SZBn80U-Ouq",
"yARipHFQZzx",
"s0uV3QiXG7",
"0rFm6fgxSOq",
"puMyYXKiQV"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040457953,
1605939326414,
1605939199233,
1605938932171,
1605938600600,
1603973325270,
1603885829669,
1603871699020,
1603638458446
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3538/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3538/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3538/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3538/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3538/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3538/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3538/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3538/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This work addresses the problem of understanding how pre-trained language models are encoding semantic information, such as WordNet structure. This is evaluated by recreating the structure of WordNet from embeddings. The study also shows evidence about the limitations of current pre-trained language models, demonstrating that all of them have difficulties encoding specific concepts.\", \"pros\": [\"good idea to reveal how well the pre-trained models encode the underlying knowledge graph\", \"detailed understanding on how language models incorporate semantic knowledge and where this knowledge might be located within the models\", \"experiments show that models coming from the same family are strongly correlated\", \"the paper shows how individual layers of the language models contribute to the underlying knowledge\", \"analysis of the different semantic factors (9 different factors, including number of senses, graph depth, etc.)\", \"paper is clearly written and understandable and includes enough details to understand the implementation of the semantic probing classifier.\"], \"cons\": [\"weakly connected goals; the response from reviewers is structured around 3 main topics, which is seen as too many for a single scientific paper. It would be easier to focus only on one topic and make a clear conclusion,\", \"single word concepts while CE models are powerful in context,\", \"lack of a profound analysis of the experimental results\", \"hard to understand on which semantic categories the pre-trained methods work well or poorly,\", \"clarification about the improvement of the semantic learning abilities based on these results.\", \"Several of the identified issues have been answered in the author's rebuttal, however, the paper would still need more work to be accepted. 
Note also that the bar at this year's ICLR conference is high and we encourage the authors to submit their updated work again at the next conference.\"]}",
"{\"title\": \"Please more feedback\", \"comment\": [\"We used pre-trained models provided by the original papers. We updated the paper to make this fact clearer. We also enhanced discussion and analysis in section 7 and appendix E regarding the low impact of pre-training corpus sizes.\", \"May we ask why you changed your Rating from 7 to 6? We would greatly appreciate it if you could point out the elements that you consider require more work. The suggestions of the other reviewers were already addressed, resulting in improved analysis and clarity in the message of the paper.\"]}",
"{\"title\": \"Extensive modifications. All suggestions addressed\", \"comment\": [\"The paper had extensive modifications to address reviewers' suggestions, resulting in improved analysis and increased clarity. In particular, regarding your comments:\", \"We have conducted a rigorous analysis of our paper, reorganizing information and highlighting the most consistent findings in order to clarify the message of the work. In particular, we include a new Table that summarizes our main findings and includes links to the corresponding supporting evidence. We believe that this new version of the paper is clearer than the previous version. Thanks for the suggestions. They allowed us to improve our paper.\", \"Section 5 was modified to provide a deeper analysis to inspire new ideas on how to improve semantic abilities, following all reviewer suggestions. We now pay attention to the most consistent findings of the paper, providing actionable insights for researchers and practitioners who may want to improve the semantic abilities of their models. We also added confidence intervals to Figure 4. Thanks for pointing it out.\", \"We added Section 7, which includes further analysis and discussion on the implications of these findings, shedding light on how these findings can be used to improve semantic abilities. We also highlight potential applications of our findings.\", \"As suggested, we added to section 5 a new analysis at the category level. Specifically, we include a new table that incorporates results for several semantic categories, indicating the performance of the different models. This analysis clarified the message of the section, leading to valuable insights. We also expand this analysis by including information in Appendix G.\"]}",
"{\"title\": \"Extensive modifications. Suggestions addressed\", \"comment\": [\"The probe is not restricted to non-contextualized single word concepts. We guess that we did not explain this point clearly enough in the previous version, thus we updated the document and Figure 1 to make it clearer. Our experiments take advantage of a dataset that contains annotations of the appearances of WordNet concepts in full sentences. Using this dataset, we can obtain the embedding of a target concept using a context-aware model. We just run the model over a sentence mentioning that particular concept. Then, we keep only the embedding that corresponds to the token of the mentioned concept (or the first of them in the case of concepts longer than one token).\", \"We added an introduction below Section 3 (before 3.1) as suggested, along with suggestions from Reviewer4.\", \"We applied a linear transformation to M(x) and M(y) before the MLP mainly to standardize the dimensionality across the different models. We clarified this point in the paper. As you pointed out, we agree that concatenating M(x) and M(y) and applying a custom-sized MLP is also a viable way to implement the probing classifier. We believe that it would lead to similar results.\", \"As suggested, Section 4.3 was removed and replaced by a quantitative analysis of model performance across semantic categories (please see Section 5 in the new version of the paper), leading to valuable insights that integrate well with the message of the rest of the paper.\", \"You mentioned that the paper seemed to have weakly connected goals. The paper is now organized around three main topics: 1) Ability of the models to encode the semantic information in Wordnet (Section 4); 2) Analysis of the model strengths and weaknesses to encode this knowledge (Section 5); and 3) Location where this knowledge is primarily encoded inside each model architecture (Section 6). 
We updated the whole paper and discussions to convey a more coherent message. In addition, we included a new section (please see Section 7) where we highlight the main findings of the paper. We also suggest how to transform these findings into actionable insights for future work.\"]}",
"{\"title\": \"All suggestions addressed\", \"comment\": [\"We included the suggested references on probe methods. We also improved the discussion on the soundness and limitations of probe methods that arise from these new references (please see the first paragraphs in Section 3 of the new version of the paper).\", \"We fixed the narrative and included missing definitions as suggested (e.g., Wordnet).\", \"We included hypotheses about why semantics are not encoded in linear subspaces, but require a non-linear model to be extracted (please see the last paragraph of section 3.2).\", \"We included a new Section (please see Section 7) with further discussion of the results and implications of the findings, to shed light on how these findings can be used to improve semantic abilities of future works.\", \"We used the Princeton WordNet Gloss Corpus because it covers around 34000 different noun synsets with more than 42000 different lemmas. SemCor can also be useful for this task, but it is a little less diverse, covering around 13000 noun synsets and 26000 lemmas.\", \"Layer-level results are now in sections 4.1 and 6, leading to new insights. Thanks for the suggestion.\", \"You correctly pointed out that it is not surprising that distant relations are more difficult to encode than others. To improve the relevance of the information included in the paper, we replaced the corresponding sub-charts of Figure 3 with a different set of charts that illustrate less-intuitive but relevant information to support the discussion. Specifically, we include: F1 v/s \\\"Number of Child Nodes\\\", \\\"Number of Senses\\\", \\\"Sense Ranking\\\" and \\\"Number of sibling nodes\\\". The charts about distances are now included in Appendix C.\", \"We included a new discussion throughout sections 5, 6 and 7 regarding the fact that models in the same family have similar results, as suggested.\"]}",
"{\"title\": \"ICLR - TRACKING THE PROGRESS OF LANGUAGE MODELS BY EXTRACTING THEIR UNDERLYING KNOWLEDGE GRAPHS\", \"review\": \"Summary\\n\\nThis work addresses the question about how pre-trained language models encode semantic information. It adapts the methodology proposed in Hewitt & Manning (2019) for syntax to semantics, using the WordNet structure instead of a syntactic structure of a sentence to encode distances among word representations. The paper analyzes how embedding models encode suitable information to recreate the structure of WordNet. The study also shows evidence about the limitations of current pre-trained language models, demonstrating that all of them have difficulties to encode specific concepts.\\n \\nQuality\\n\\nThe proposed idea is very interesting, but the paper does not give a complete picture of what probing tasks can show and what their limitations are. The contribution of the paper is not clear. What can we learn from the experiment of the paper? How can we improve current language models? How can we exploit the distilled information?\\n\\nMissing reference for semantic probing tasks\\n\\nYaghoobzadeh, Yadollah, et al. \\\"Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings.\\\" Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019.\\nPeters, Matthew, et al. \\\"Dissecting Contextual Word Embeddings: Architecture and Representation.\\\" Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018.\\n\\nMissing references for the usefulness of probing tasks\\n\\nSaphra, Naomi, and Adam Lopez. \\\"Understanding Learning Dynamics Of Language Models with SVCCA.\\\" Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 
2019.\\n\\nA recent paper on the usefulness of probing tasks\\n\\nRavichander, Abhilasha, Yonatan Belinkov, and Eduard Hovy. \\\"Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?.\\\" arXiv preprint arXiv:2005.00719 (2020).\\n \\nClarity\\n\\n- The paper is clear and well written even if some references about the soundness of probing tasks are missing (see above) and a related discussion is missing too. In fact, the results of probing tasks have been questioned (see the references above) because it is not clear if the use of supervision allows the representation to adapt to the task.\\n- The related work section is a list of contributions and successes in probing tasks without a clear narrative.\\n- WordNet is not introduced\\n- Many acronyms are not defined (e.g.: WSD, MLP)\\n- I think that this part is important and should be clarified (last paragraph of Section 3.2): \\u201cTests based on linear transformations such as that proposed by Hewitt & Manning (2019) did not allow us to recover the WordNet structure, which indicates that the subspaces in which the word embeddings models encode the semantics are not linear\\u201d. 
Intuitions or even hypotheses about this behaviour are not given.\\n \\nOriginality\\n\\n- The analysis includes recent models such as ALBERT and T5.\\n- The idea of using the WordNet taxonomy to adapt the model proposed in Hewitt & Manning (2019) is very interesting\\n \\nSignificance\\n\\nThe proposed idea is very interesting and also the methodology is sound, but the conclusions are weak:\\n- It is intuitive that it is more difficult to encode distant relations than others.\\n- The fact that models in the same family have similar results is not discussed\\n- It is not explained why only the Princeton WordNet Gloss Corpus has been used and not larger datasets annotated with WordNet senses such as SemCor.\\n- Usually the models are evaluated at each layer, here all the layers are concatenated making it more difficult to understand where semantic information is stored.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The authors' contribution is an extensive study on how different language models incorporate semantic knowledge on the concept level.\", \"review\": \"The authors conduct a study investigating how different language models incorporate semantic information in their respective learned representations. Investigating language models on their performance in concept-level tasks is motivated by the importance of the ability to organize and understand concepts in human intelligence. Another motivation is that other studies on the semantics in language models are not conclusive according to the authors, especially in determining where the semantic knowledge lies within the language models.\\nThe study is conducted by using a semantic probing classifier which \\u2013 in short \\u2013 is trained to determine whether two words (inputted as learned representations from the language models) are semantically related (according to WordNet) or not. This classifier also aids in recreating a sampled knowledge sub-graph from WordNet.\\nThe experimental section contains the evaluation of the two tasks, firstly the classification as described above and secondly the KG reconstruction.\", \"the_main_findings_of_the_study_can_be_summarized_as_follows\": [\"The authors show experimentally that models coming from the same family are strongly correlated\", \"The authors show the experimental outcomes of the tasks mentioned above\", \"The authors show how the individual layers of the language models contribute to the underlying knowledge\", \"The authors show for all models how they are affected by different semantic factors (9 different factors, including number of senses, graph depth etc.)\", \"The paper is clearly written and understandable and includes enough details to understand the implementation of the semantic probing classifier. 
The appendix contains detailed outcomes of the different experiments which, together with the result section, give a good overview of the experimental results.\", \"My recommendation is towards acceptance of the paper, because the authors contribute to a more detailed understanding on how language models incorporate semantic knowledge and where this knowledge might be located within the models. Exploiting those findings could potentially lead to an improvement on future models. Also, the findings per se give more insight on how the internals of large models process information, which is a step towards a more explainable AI.\", \"I have one question regarding the inter-comparability of the models; were the tested language models all trained on the same unlabeled textual data (or on data of comparable size), or did you use pre-trained models that were published alongside their respective papers?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting technique, needs stronger message and more rigor\", \"review\": \"The paper 1) introduces a method to use three types of text embedding methods (non-contextual, contextual, LM-based) to predict word relatedness (as a binary classification problem) for pairs of words in wordnet 2) uses these relatedness scores to build proxies of the Wordnet graph 3) carries out experiments based on the two bullet points to compare semantic understanding abilities of the aforementioned embedding methods.\", \"major_comments\": [\"The paper carries out quite a few experiments with weakly connected goals. It looks like a combination of miscellaneous results based on a common (and limited) technique, rather than delivering a coherent message with interrelated takeaways from follow-up experiments.\", \"The technique to probe models is quite restricted in that it is centered around single word concepts. Given that contextualized models are utilized, it seems like a rather handicapped investigation of very powerful models.\", \"In section 4.3, authors try to make a point about correlations on visuals. This is a dangerous approach, and it would be much better to rely on numerical summaries of correlations. In fact, it is extremely hard to judge correlation by looking at pictures, because correlation needs to take into account the variability in an F1 metric with the other axis kept constant (only means are shown). A curve with less slope on Figure 3 might indicate a much higher degree of co-movement with the metric on the X axis if the randomness in the y axis at any point on the x axis is very low. Authors should revisit statistical correlation, and preferably revamp this section. 
That said, I'm not quite convinced that it is a publication-worthy result to say similar methods (and each of the 3 buckets is very similar within) produce similar concept relatedness scores.\", \"It's hard to understand how the proposed probing classifier is different than concatenating $M(x)$ and $M(y)$ and directly applying MLP on it. One can choose an MLP with custom first hidden layer size and activation function that would be functionally equivalent to what's being proposed.\", \"There is definitely truth to the title, but I'd suggest not conflating the term \\\"knowledge graph\\\", which traditionally represents actual world knowledge, and not lexical databases.\", \"Minor comment\", \"Please write something under section 3 (before 3.1). Given that it's not clear on a first pass that you're introducing two different methods in 3.2 and 3.3, the empty space is a good opportunity to tell the reader about this fact.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good analysis, but limited contribution\", \"review\": \"This paper analyzed how well the previously proposed pre-training models could encode the underlying knowledge graph by defining probing classifiers. The probing classifier is trained on top of the pre-trained representation models, such as non-contextualized word embeddings (e.g., Glove), contextualized word embeddings (e.g., BERT), and generative language models (e.g., GPT-2), and tries to reconstruct the structure of the knowledge graphs.\\n\\nThis paper is well-written. Readers will easily understand what this paper did and tried to reveal. While it was not sufficiently clear why this paper adopts this approach, I think the idea of the probing classifiers and knowledge graph construction is a reasonably good idea to reveal how well the pre-training models encode the underlying knowledge graph.\\n\\nThe main con of this paper is the lack of a profound analysis of the experimental results. In Section 5, this paper tried to reveal on what knowledge the existing pre-trained models work well or not by using some statistics (e.g., the relative depth). While I think this approach is also good, readers will need more detailed information about analysis results to inspire new ideas to improve semantic learning abilities. For example, the number of samples in each concept depth and wordnet distance between concepts changes. Therefore, it is better to estimate confidence intervals for readers to precisely understand how the differences of the median F1-scores are important (or not important) in each depth or distance. Second, it is better to analyze which semantic category the pre-trained methods work well or not well. I guess that the concept depths and frequencies hugely change depending on the concept categories. It is helpful if this paper also elucidates which semantic category the existing methods work well and not well. 
Thirdly, it is better if this paper sheds light on how readers can improve the semantic learning abilities based on these results. Without these proposals, I think the contribution of this paper is limited.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
o3iritJHLfO | Bidirectional Variational Inference for Non-Autoregressive Text-to-Speech | [
"Yoonhyung Lee",
"Joongbo Shin",
"Kyomin Jung"
] | Although early text-to-speech (TTS) models such as Tacotron 2 have succeeded in generating human-like speech, their autoregressive architectures have several limitations: (1) They require a lot of time to generate a mel-spectrogram consisting of hundreds of steps. (2) The autoregressive speech generation shows a lack of robustness due to its error propagation property. In this paper, we propose a novel non-autoregressive TTS model called BVAE-TTS, which eliminates the architectural limitations and generates a mel-spectrogram in parallel. BVAE-TTS adopts a bidirectional-inference variational autoencoder (BVAE) that learns hierarchical latent representations using both bottom-up and top-down paths to increase its expressiveness. To apply BVAE to TTS, we design our model to utilize text information via an attention mechanism. By using attention maps that BVAE-TTS generates, we train a duration predictor so that the model uses the predicted duration of each phoneme at inference. In experiments conducted on LJSpeech dataset, we show that our model generates a mel-spectrogram 27 times faster than Tacotron 2 with similar speech quality. Furthermore, our BVAE-TTS outperforms Glow-TTS, which is one of the state-of-the-art non-autoregressive TTS models, in terms of both speech quality and inference speed while having 58% fewer parameters. | [
"text-to-speech",
"speech synthesis",
"non-autoregressive",
"VAE"
] | Accept (Poster) | https://openreview.net/pdf?id=o3iritJHLfO | https://openreview.net/forum?id=o3iritJHLfO | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"8yuOuoixLg",
"wc7GszqUOZg",
"qQJH4OGmi-9",
"62YyDhKdlPe",
"QuoKFXYNY5C",
"-p6HsCEf1mv",
"sbYDbQv8UIV",
"kd1_9L5TkJB",
"fCQkS_lGRN-",
"K-8KCHnVmK",
"5QX06tv9adT",
"4bygvQ1cRVB",
"pkC866oRd-8"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040411934,
1606190811579,
1605675016868,
1605514139112,
1605514058833,
1605513966801,
1605513791791,
1605513410896,
1605512904428,
1604045809775,
1603942455863,
1603876229254,
1602738500470
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3537/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3537/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3537/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3537/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3537/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3537/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3537/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3537/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3537/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3537/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3537/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3537/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"Non autoregressive modelling for text to speech (TTS) is an important and challenging problem. This paper proposes a deep VAE approach and show promising results. Both the reviewers and the authors have engaged in a constructive discussion on the merits and claims of the paper. This paper will not be the final VAE contribution to TTS but represents a significant enough contribution to the field to warrant publication. It is highly recommended that the authors take into account the reviewers' comments.\"}",
"{\"title\": \"Thank you for your careful reply\", \"comment\": \"Thanks for your reply, and below are our answers to your questions.\\n \\nQ1. You mean when you use simple VAE, the TTS model also fails to learn? \\nA1. Yes. In that case, the training of the TTS model also failed. \\n \\nQ2. To show the robustness of your model, you should compare with more robust TTS models such as FastSpeech, or Tacotron 2 with a location-relative attention mechanism. So this is a wrong statement \\\"our BVAE-TTS outperforms Glow-TTS\\\" \\nA2-1. Thank you for your careful reply and we all agree with your opinion. We are also very sorry that we cannot conduct the experiments on the non-AR models such as FastSpeech, because there are no officially released source codes or weights. \\nA2-2. As you mentioned, although BVAE-TTS has lost to Tacotron 2 in terms of speech quality on the in-domain dataset, we think our BVAE-TTS outperforms Glow-TTS, which is the state-of-the-art non-AR TTS model, in terms of both quality and speed as shown in the experiments.\"}",
"{\"title\": \"The revised paper is uploaded. (Last edited Nov 23 21:05 AoE)\", \"comment\": [\"We have uploaded a revised paper to incorporate the reviewers' comments, concerns, and suggestions.\", \"Thank all the reviewers for their constructive comments and extensive analysis that are really helpful to make our paper more complete.\", \"Specifically, the updated version includes:\", \"We have modified the configuration of the paper focusing on clarifying the motivation and advantages of BVAE-TTS.\", \"We have done much more extensive proofreading to improve its readability and we have tried to help readers understand our approach by adding more explanations, including the pseudo-codes for training and inference of BVAE-TTS in the appendix section.\", \"Supplementary material has been updated including the audio samples for MOS-OOD.\", \"Minor typos and inconsistent reference format have been fixed in the revised version. (Last edited Nov 23 21:05 AoE)\"]}",
"{\"title\": \"Responses to AnonReviewer4 (Part 2/2)\", \"comment\": \"Thank you so much for the detailed comments and suggestions. They are really helpful to improve the quality of our paper, especially to make our paper clearer and more convincing.\\nBelow are the itemized responses regarding each comment. We hope our answers can help our paper sound more convincing to you.\\nBecause of the maximum 5000 character limit, we write the answers in two parts. \\n \\nQ7. I'd recommend providing similar motivation for using dot-product soft attention plus straight-through argmax instead of Glow-TTS's alignment search or other competing approaches. Is it because it's a superior approach or just because it's different from existing approaches? \\nA7. Our attention mechanism with ST-argmax is a different approach rather than the improved one of Monotonic Alignment Search (MAS) of Glow-TTS. In terms of the alignment search algorithm, MAS is developed specifically for Glow-TTS. This is because it is trained to maximize the likelihood by directly obtaining the conditional prior distribution of latent representation \\u2018z\\u2019. Since the decoder of BVAE-TTS does not consist of normalizing flows, MAS can not be used in BVAE-TTS. Although it might be possible to use other monotonic alignment search algorithms such as [3], it needs additional dynamic programming computation after the dot-product, and it goes beyond the scope of this study.\\n\\nQ8. I don\\u2019t believe Tacotron is actually the first end-to-end TTS system. \\nA8. We missed the paper. We will remove the word \\u2018first\\u2019 and cite the paper too. Thank you for sharing this work.\\n\\nQ9. The Related Work section is fairly redundant with information that is already presented in the introduction. \\nA9. We are thinking of clarifying the advantages of BVAE-TTS over the flow-based TTS models and other previous TTS models in the introduction and related work sections. 
We will consider your suggestion and reflect it in the revised version.\\n\\nQ10. The first paragraph of Sec 4.1 is quite confusing upon a first reading. I had to read the second sentence (\\u201cVia the attention network\\u2026\\u201d) many times to understand what was being described. \\nA10. Thank you for pointing this out. We will edit the part to help the readers understand more clearly, especially focusing on the second sentence.\\n\\nQ11. I\\u2019m curious how you arrived at a sample temperature of 0.333. Was this empirically tuned for BVAE-TTS or in response to Glow-TTS\\u2019s findings? \\nA11. We chose the temperature 0.333 after listening to the samples generated with different temperatures, 0, 0.333, 0.6, 1.0. As the qualities are not that sensitive to the temperatures, we unified the temperature to 0.333 following the Glow-TTS. (It showed the best performance on LJSpeech in Glow-TTS.)\\n\\nQ12. \\u201cInference Time\\u201d: It seems important to include details about the hardware platform used to gather the speed results. \\nA12. We described our hardware setting in Section 5.1, but did not mention how we use the hardware setting to measure the inference time, e.g. are the inference times measured on CPU or GPU? We will add the description in the revised version.\\n\\nQ13. There are minor English style and grammar issues throughout the paper that make the paper slightly more difficult to read. Please have the paper proofread to improve readability. \\nA13. We will make much more effort to improve the readability of our paper by having much more extensive proofreading.\\n\\n[1]: Kim, Jaehyeon, et al. \\\"Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search.\\\" arXiv preprint arXiv:2005.11129 (2020). \\n[2]: Miao, Chenfeng, et al. \\\"Flow-TTS: A Non-Autoregressive Network for Text to Speech Based on Flow.\\\" ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. 
\\n[3]: He, M., Deng, Y., He, L. (2019) Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic Attention for Neural TTS. Proc. Interspeech 2019, 1293-1297, DOI: 10.21437/Interspeech.2019-1972.\"}",
"{\"title\": \"Responses to AnonReviewer4 (Part 1/2)\", \"comment\": \"Thank you so much for the detailed comments and suggestions. They are really helpful to improve the quality of our paper, especially to make our paper clearer and more convincing.\\nBelow are the itemized responses regarding each comment. We hope our answers can help our paper sound more convincing to you. \\nBecause of the maximum 5000 character limit, we write the answers in two parts. \\n \\nQ1. I have some slight concerns about the clarity of the presentation that makes it harder to understand the approach and its motivation. \\nA1. To help the readers understand our motivation and the approach better, we are planning to revise our paper by focusing on making it clearer and making our motivations and advantages more prominent. Furthermore, we will also add a pseudo-code explanation following the R3\\u2019s comment in the modified manuscript to improve its understandability.\\n\\nQ2. The quality of the speech produced by the system is only evaluated on a single dataset and uses only 50 synthesized examples in the subjective ratings. \\nA2. We totally understand why you think in that way. Here is our answer. We used the LJSpeech dataset because it is easy to access and many TTS papers also had used only the LJSpeech dataset as a single speaker dataset. In addition, although BVAE-TTS was trained only on the LJSpeech dataset, we evaluated the model on the other 50 out-of-domain sentences to see its generalization ability. When it comes to the number of sentences used for the test, we followed previous TTS papers [1, 2] measuring MOS for about fifty or less sentences.\\n\\nQ3. When considering the Glow-TTS paper (which this seems like a direct follow-up to), the system improvements seem quite incremental \\nA3. We can relate to your concern, however, we think BVAE-TTS is a new direction of non-AR TTS, rather than direct follow-up research of Glow-TTS. 
In this context, we think our model has so much potential, and we hope that it leads to many improved VAE-based TTS models.\\n\\nQ4. Listening to a few of the audio examples provided in the supplemental materials, I don\\u2019t get the sense that the audio quality is significantly better than that of Glow-TTS as is suggested by the MOS numbers (BVAE-TTS sounds a bit muffled to my ears relative to Glow-TTS). \\nA4. Thank you for carefully listening to the audio samples and sharing your impression. When we asked people in my laboratory to listen to the audio samples, many people said they didn\\u2019t feel that BVAE-TTS sounds muffled. On the contrary, in terms of naturalness, they said BVAE-TTS is even better than Glow-TTS, e.g. LJ0023-0016, LJ046-0191.\\nAlso, we think that the muffled sound stands out when the audio samples of BVAE-TTS and Glow-TTS are compared side-by-side. However, since we measured the MOS for the different TTS models independently, we guess BVAE-TTS obtained better MOS than Glow-TTS in terms of naturalness.\\n\\nQ5. It suffers from the duration averaging effects and inability to sample from the full distribution of prosodic realizations. \\nA5. As you point out, we tried to consider the durations also as other latent variables, but it was hard to successfully combine it with an attention mechanism. However, we think it is a very plausible approach and we will study it in future work.\\n\\nQ6. The motivation would be made clearer if you were more specific early on about the potential advantage of VAE's relative to flows however you want to describe it (parameter efficiency, more flexible layer architectures, more powerful transformations per layer, etc.). \\nA6. Thank you for your suggestion. We will revise the introduction and related work sections to clarify the advantages and potential of BVAE-TTS. In the sections, we will add more descriptions about the advantages of BVAE-TTS over the flow-based models that you mentioned.\"}",
"{\"title\": \"Responses to AnonReviewer2 (Part 2/2)\", \"comment\": \"We thank the reviewer for the extensive comments, which were very constructive and helpful for building a better paper.\\nBelow are our answers to your questions. Because of the maximum 5000 character limit, we write the answers in two parts. \\n \\nQ5. Is the non-autoregressive text-to-mel-spectrogram model necessary? For neural based TTS systems, most of time is in vocoder. \\nA5. Yes, your point is correct in terms of inference time. However, we think the fact that the inference time does not increase linearly as a text gets longer is still important. Furthermore, non-autoregressive generation shows its strength more in the out-of-domain data, i.e. very long input text, or the text patterns not existing in the training dataset. This is because the AR models suffer from accumulated prediction error. Thank you for your question and we will clarify it.\\n\\nQ6. Even if we assume the speed for text-to-mel-spectrogram is important, I don't think measuring speed with batch size = 1 is important, because non-autoregressive models can not be used for streaming. A proper comparison is measure FLOPS and throughput. \\nA6. We think comparing the inference time of TTS models is more practical to evaluate the models. This is because lower FLOPS for generating a speech does not guarantee shorter inference time. As far as we know, most previous studies on non-autoregressive TTS models also reported their inference time instead of FLOPS or throughput, including ParaNet and FastSpeech 1,2. [1,2,3,4]\\n\\nQ7. The paper claims their model is more compact, but there is no comparison for a smaller Tacotron2 model or other non-autoregressive model. \\nA7. Our initial motivation is to build a new VAE-based non-autoregressive TTS model instead of developing a compact TTS model. 
Therefore, when we compare the number of parameters, we mainly compare BVAE-TTS and Glow-TTS in terms of both MOS and the number of parameters. This is because both models are non-AR TTS models without a teacher model.\\n\\n[1]: Ren, Yi, et al. \\\"Fastspeech: Fast, robust and controllable text to speech.\\\" Advances in Neural Information Processing Systems. 2019. \\n[2]: Kainan Peng, Wei Ping, Zhao Song, and Kexin Zhao. Non-autoregressive neural text-to-speech. In Proceedings of the 37th International Conference on Machine Learning, pp. 10192\\u201310204. PMLR, 2020. \\n[3]: Jaehyeon Kim, Sungwon Kim, Jungil Kong, and Sungroh Yoon. Glow-tts: A generative flow for text-to-speech via monotonic alignment search. arXiv preprint arXiv:2005.11129, 2020. \\n[4]: Ren, Yi, et al. \\\"FastSpeech 2: Fast and High-Quality End-to-End Text-to-Speech.\\\" arXiv preprint arXiv:2006.04558 (2020).\"}",
"{\"title\": \"Responses to AnonReviewer2 (Part 1/2)\", \"comment\": \"We thank the reviewer for the extensive comments, which were very constructive and helpful for building a better paper.\\nBelow are our answers to your questions. Because of the maximum 5000 character limit, we write the answers in two parts. \\n \\nQ1. What is the difference between your model and FastSpeech 2? \\nA1. As you mentioned, FastSpeech 2 said it succeeded in removing the teacher-student distillation. However, to achieve this, it requires additional duration labels and other acoustic features such as pitch and energy information obtained from external tools. On the contrary, since our model only utilizes a text-mel-spectrogram pair, it does not depend on the external tools, and so the training is simpler than the FastSpeech2. We think that the differences will be helpful to clarify the advantages of our model, so we will add this in the modified version with a citation of the FastSpeech2 paper. Thank you for your fruitful question. \\n \\nQ2. ParaNet and FastSpeech1, 2 are very related to this paper. But why only compare with Glow-TTS? \\nA2. We agree that ParaNet and FastSpeech 1,2 can be good baselines for the experiment, but the official source codes for the models are not provided. Therefore, to fairly compare the performance of BVAE-TTS to other TTS models, we choose two models, Tacotron 2 and Glow-TTS, where each represents an AR and a Non-AR TTS model. A pre-trained Glow-TTS model is provided by the author. Although Tacotron 2 from NVIDIA is not provided by the official author, it is widely used and is recognized in the field of speech synthesis as being correctly implemented. \\n \\nQ3. The paper has an ablation study section, but it is missing a couple very simple baseline: 1) remove VAE, purely predict mel-features based on duration and phoneme embeddings; 2) use a simple VAE instead of hierarchical one. \\nA3. 
When we removed VAE, the TTS model failed to learn mel-spectrogram generation, and when we used simple VAE, the result was the same. Therefore, we worry that the results would not be informative enough to compare with BVAE-TTS. However, if the conclusion of this discussion is that we need to do further ablation studies, we will report the results in the modified version.\\n\\nQ4. Tacotron 2 shows better speech quality for in-domain dataset and worse for out-of-domain dataset. However, there are no audio samples generated using out-of-domain texts in supplementary material. Could you also provide out-of-domain audio samples? \\nA4. Yes. We will update the supplementary files including the audio samples generated using OOD text data.\"}",
"{\"title\": \"Responses to AnonReviewer3\", \"comment\": \"We thank the reviewer for the great feedback. The feedback is very constructive and helpful for building a better paper.\\nBelow are our answers to your questions. \\n \\nQ1. It is quite difficult to understand how the BVAE-TTS works. For example, what are the exact layer inputs and outputs, and how the parameters of the normal distributions are used? \\nA1. Thank you for pointing this out, and we also agree that the architecture of BVAE-TTS is quite complicated. In BVAE-TTS, the mean and covariance values are predicted with a 1-D Conv layer (+ softplus). The delta values make the difference between prior and posterior caused by the data observation, and they are not the values accumulated along the layers. Following your suggestion, we will add the pseudo-code explanation of the network in the modified manuscript. Thank you for your suggestion.\\n \\nQ2. Why is the output of the attention layer not provided to the encoder? \\nA2. If the encoder you mentioned is the bottom-up path of BVAE-TTS, the attention is conducted only at the top of the bottom-up path, instead of between every BVAE block. The output is then inputted directly to the top-down path. We think you have some misunderstandings about this part. However, we think it could be an interesting research to use attention mechanisms between the BVAE blocks, so that the encoder effectively extracts acoustic features that are disentangled to the textual contents. Thank you for your interesting suggestions.\"}",
"{\"title\": \"Responses to AnonReviewer1\", \"comment\": \"We thank you for your interest in our research and we also hope it becomes a good starting point for VAE-based TTS research.\\nBelow are our answers to your questions.\\n\\n Q1. How does the duration modeling result in 'monotonic\\u2019 alignment? \\n A1. \\u2018Monotonic alignment\\u2019 means phoneme representations are used in an orderly manner. In other words, there is no case where a phoneme representation that appears later in a sentence is used earlier in the decoder. In this context, if we inflate the phoneme representations based on their durations, the above situation never happens. Thank you for your fruitful question and we will clarify this in the revised version.\\n \\nQ2. A comparison with an equivalent soft attention implementation might be insightful. \\nA2. It is the situation where we train BVAE-TTS without using ST-argmax technique. (Sec 5.3.2.) When we remove the constraint of a one-to-one mapping between phonemes and mel-spectrogram frames, our model fails to learn the alignment. Thank you for pointing this out and we will clarify this in the revised version.\\n \\nQ3. I am wondering how this model would perform in a multi-speaker dataset. One aspect that the paper does not touch in detail is in its capabilities as a generative model. It would be interesting, for instance, to see if this model can in any way separate speaker style from content with a multi-speaker model. \\nA3. Thank you for your suggestion, however, our initial aim was to develop a novel TTS model based on VAE architecture, and so we focus on succeeding in generating speech with competitive quality. However, we are actually planning to extend our model to the multi-speaker scenario in future work. For example, we expect that, by letting the model extract a global latent vector from a mel-spectrogram, the model can change the global style of the speech (e.g. 
speaker identity) by controlling the global latent vector.\"}",
"{\"title\": \"Novel, fast architecture with many insightful ideas - accept\", \"review\": \"Post rebuttal and discussion\\n========================\\nSeveral reviewers have pointed out that the paper needs more comparisons/ablations with existing models (e.g. Paranet/Fastnet). To this end, I think we at least need a comparison with Paranet, which is a 'comparable' non-autoregressive CNN based VAE based model with a few other components such as attention distillation. \\n\\nThere are components in the paper that could do with more ablation studies \\n- argmax with straight through estimator\\n- some guidelines on BVAE blocks and tuning\\n\\nIn light of these points, together with the fact that we don't have any theoretical novelties in this paper, I reduce my score to 6. Even so, I feel that the paper would be a valuable contribution because \\na) A generative model (GAN/VAE/VQVAE/Flow based models/score matching based models) might add extra benefit in the synthesis problem, as compared with a supervised model without a similar generative component such as Tacotron. The NVAE has been shown to significantly outperform the regular VAE in image generation tasks. It stands to reason that it would do well in speech generation also. \\nb) Speed, robustness and ease of implementation (although this remains to be demonstrated).\\n \\nInitial Review\\n===========\\n\\nThis paper proposes a non-autoregressive (non AR) way to perform text to speech synthesis. It uses a VAE based setup - adapted from the recent image paper NVAE to build two stacks of hierarchical VAE blocks (in priors), one going bottom up and the other, top down. 
The key claims are that it results in improved speed, and reduced model footprint from using a non AR architecture, with excellent quality comparable to the best autoregressive/recurrent methods in Tacotron2 [2] and non AR glow-TTS[3].\\n\\nThe work contains many interesting ideas for TTS, and I am very interested in seeing how this work pans out in practical speech synthesis applications.\", \"key_ideas\": \"1. The bidirectional stack, which they call BVAE is adapted from the recent NVAE work which has produced stellar image generations. The model uses 1D convolutions under the hood, in contrast with the fashionable, but slow autoregressive flows or recurrent models. If one can get such a model to work, it could be advantageous in effecting savings in computational time and model size. \\n\\nDuring training, at the top of the bottom-up stack, text features are inflated to the size of the mel spectrogram features, and reconstructed with the top down BVAE stack. For inference, text is inflated to an expanded text matching audio mels, and then sent down the top-down stack to give a mel sample.\\n\\n2. Attention modeling: An important consideration here is to align text and mel, commonly done with an attention mechanism. In this work, the attention alignment shows up as a duration model, which is rather interesting, and seemingly gives additional flexibility. After aligning text and mel (using dot product), the alignment can be reinterpreted as a duration model by comparing phoneme and mel frame alignments. Furthermore, they use a discrete match with argmax rather than a sum over all attention alignments as is generally done. This also necessitates the use of the straight-through estimator while backpropagating since the durations are rounded entities. 
This type of modeling seems also to be used in the Glow-TTS work but with alignments determined through dynamic programming.\\n\\nI found the result that the model is not very sensitive to alignment mismatches to be quite remarkable.\\n\\n3. Fittings for robustness during inference: They use several instructive ideas - jittering text, adding positional embeddings, diagonal penalty (since alignment is mostly diagonal) and KLD annealing. \\n\\n4. Analyses - ablations to see which of the VAE blocks affect the result by varying temperature (from Glow [3]).\", \"my_thoughts\": \"Generally, the paper made for fascinating reading. Having worked with Tacotron, I have always felt that adding a VAE to that (RNN based) setup would improve its generative capabilities by giving it additional regularization qualities, among other things. That we can see the model perform better when we add jitter and can also respond to the duration specified seems to corroborate that in a loose way (figure 10). \\n\\n- Could the authors clarify how the duration modeling results in 'monotonic' alignments? As far as I can see, the argmax guarantees a unique match, but is monotonicity necessary?\\n\\nFrom section 5.3.2:\\n\\\"Since the text is forced to be used monotonically in the duration-based generation, it makes the model more robust to the attention errors while making fewer pronouncing mistakes.\\\"\\n\\n- A comparison with an equivalent soft attention implementation might be insightful. \\n\\n- Multi Speaker TTS: I am wondering how this model would perform in a multispeaker dataset, say libritts. One aspect that the paper does not touch in detail is in its capabilities as a generative model. 
It would be interesting, for instance, to see if this model can in any way separate speaker style from content with a multispeaker model.\\n\\nOverall, I think this paper would be a good addition to the body of speech synthesis work, and recommend that it is accepted.\\n\\n\\n[1] NVAE: https://arxiv.org/pdf/2007.03898.pdf\\n[2]: Tacotron2: https://arxiv.org/pdf/1712.05884.pdf\\n[3] Glow-TTS: https://arxiv.org/pdf/2005.11129.pdf\\n[4]: Glow: https://arxiv.org/pdf/1807.03039.pdf\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Potentially valuable contribution to parallel TTS, with some concerns.\", \"review\": \"Summary:\\nThis paper presents BVAE-TTS, which applies hierarchical VAEs (using an approach motivated by NVAE and Ladder VAEs) to the problem of parallel TTS. The main components of the system are a dot product-based attention mechanism that is used during training to produce phoneme duration targets for the parallel duration predictor (that is used during synthesis) and the hierarchical VAE that converts duration-replicated phoneme features into mel spectrogram frames (which are converted to waveform samples using a pre-trained WaveGlow vocoder). The system is compared to Glow-TTS (a similar parallel system that uses flows instead of VAEs) and Tacotron 2 (a non-parallel autoregressive system) in terms of MOS naturalness, synthesis speed, and parameter efficiency.\", \"reasons_for_score\": \"Overall, I think the system presented in this paper could be a valuable contribution to the field of end-to-end TTS; however, from a machine learning perspective, the contributions are incremental and quite specific to TTS. In addition, I have some slight concerns about the clarity of the presentation that made it harder to understand the (fairly simple) approach and its motivation than I\\u2019d expect from an ICLR paper. Finally, the quality of the speech produced by the system is only evaluated on a single dataset and uses only 50 synthesized examples in the subjective ratings. For these reasons, I feel this paper would be a better fit for a speech conference or journal after addressing the evaluation and presentation issues, but I would still support acceptance if other reviewers push for it and my concerns are addressed.\", \"high_level_comments\": [\"The speed, parameter efficiency, and MOS results are quite promising. 
However, when considering the Glow-TTS paper (which this seems like a direct followup to), the system improvements seem quite incremental (replace flows with HVAEs and replace the monotonic alignment search with soft attention plus argmax).\", \"Incremental system improvements are great if they result in significant improvements that are demonstrated through rigorous experiments, however, compared to Glow-TTS, the experiments are not nearly as comprehensive and convincing. Listening to a few of the audio examples provided in the supplemental materials, I don\\u2019t get the sense that the audio quality is significantly better than that of Glow-TTS as is suggested by the MOS numbers (BVAE-TTS sounds a bit muffled to my ears relative to Glow-TTS).\", \"Since this system uses the same deterministic duration prediction paradigm as Glow-TTS (and other parallel TTS systems), it suffers from the same duration averaging effects and inability to sample from the full distribution of prosodic realizations.\", \"The motivation would be made clearer if you were more specific early on about the potential advantage of VAE's relative to flows however you want to describe it (parameter efficiency, more flexible layer architectures, more powerful transformations per layer, etc.).\", \"I'd recommend providing similar motivation for using dot-product soft attention plus straight-through argmax instead of Glow-TTS's alignment search or other competing approaches. Is it because it's a superior approach or just because it's different from existing approaches?\"], \"detailed_comments\": [\"Section 2: I don\\u2019t believe Tacotron is actually the *first* end-to-end TTS system. Maybe it was the first to gain widespread attention, but I know that char2wav (if you count that as e2e TTS) preceded it chronologically in terms of first arxiv submission date.\", \"Section 2: The Related Work section is fairly redundant with information that is already presented in the introduction. 
It might be worth combining the two sections. This should free up space for additional experiments, explanations, or analysis.\", \"Section 4.1: The first paragraph here was quite confusing upon a first reading. I had to read the second sentence (\\u201cVia the attention network\\u2026\\u201d) many times to understand what was being described.\", \"Section 5.2: I\\u2019m curious how you arrived at a sample temperature of 0.333. Was this empirically tuned for BVAE-TTS or in response to Glow-TTS\\u2019s findings?\", \"Section 5.2, \\u201cInference Time\\u201d: It seems important to include details about the hardware platform used to gather the speed results.\", \"There are minor English style and grammar issues throughout the paper that make the paper slightly more difficult to read. Please have the paper proofread to improve readability.\", \"Update (Nov 24, 2020):\", \"After reading through the author responses and the updated version of the paper, I feel like a sufficient number of my concerns have been addressed to increase my score to 6. Specifically, the motivation has been made clearer, the related work section is no longer redundant with the intro, and the authors gave an adequate explanation about the necessity of their attention-based alignment method.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An interesting paper with some non-solid claims\", \"review\": \"This paper combines FastSpeech with a hierarchical VAE (or ladder VAE? in their paper it is called a bidirectional VAE) to achieve parallel and high-quality text-to-mel synthesis.\", \"the_paper_claims_these_contributions\": \"(1) Introducing an online fashion for duration prediction, instead of the distillation used in FastSpeech and ParaNet, so the model is more e2e. (2) Introducing a BVAE, which extracts features hierarchically to better capture prosody (overcoming the one-to-many problem). During inference, the prior can be used directly. This is more direct than previous VAE applications in TTS, which only use the VAE to capture residual information. (3) It is faster, with the same quality as autoregressive Tacotron and better quality than other published non-autoregressive models.\\n\\nThe key strength of this paper is that the architecture is new. I think using a hierarchical VAE here is reasonable. \\n\\nMy concerns mostly stem from the conclusions and experiments.\\n(1) The paper claims that, compared to previous non-autoregressive models, it is more e2e, since both FastSpeech (which also uses a duration predictor) and ParaNet (without a VAE) rely on distillation. However, there is another paper called FastSpeech 2 (https://arxiv.org/abs/2006.04558, published on June 8th) whose model also claims \\\" 1) removing the teacher-student distillation to simplify the training pipeline\\\". Can the authors explain the difference? Also, I think they need to cite that paper because it was published in June and is very related.\\n(2) As mentioned in (1), ParaNet and FastSpeech 1/2 are very related to this paper. But why only compare with WaveGlow?\\n(3) The paper has an ablation study section, but it is missing a couple of very simple baselines: 1) remove the VAE and purely predict mel-features based on duration and phoneme embeddings; 2) use a simple VAE instead of a hierarchical one. 
How does this affect the performance?\\n(4) One key claim of this paper is that it is as good as Tacotron 2. However, for the in-domain test, it is 0.2 behind. Listening to the audio samples provided by the authors, it is indeed significantly worse. The out-of-domain results look better; I suspect the reason is that Tacotron 2 has some attention failures, since it is not as robust as a duration-based model. A proper baseline here is a FastSpeech model. Could you also provide OOD samples? It is really hard to believe such a prosody gap can be filled by switching domains.\\n(5) Back to the original motivation: why do we need a non-autoregressive model for TTS? For a neural TTS system, most of the time is spent in the vocoder. Even if we assume the speed of mel-to-spec generation is important, I don't think measuring speed with batch size = 1 is meaningful, because a non-autoregressive model cannot be streamed. A proper comparison would measure FLOPS and throughput. This might make more sense for offline TTS. This is a minor concern, as long as the quality is good enough.\\n(6) The paper claims their model is more compact, but there is no comparison with a smaller Taco2 model or other non-autoregressive models. \\n\\nIn summary, based on my understanding, this paper proposes a new non-autoregressive text-to-mel model with a quality regression but possibly better robustness. My opinion is that it is borderline for ICLR, since the importance of the proposed VAE was not well justified, and the quality is not as good as the autoregressive model.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Great results and thorough evaluation with a well-motivated model, but presentation could be better\", \"review\": \"Summary: Neural models that autoregressively generate mel spectrograms from text (or phonemes), such as Tacotron, have been used to generate high quality synthetic speech. However, they suffer from slow inference speed due to their autoregressive nature. To alleviate this, non-autoregressive models have been proposed, such as FastSpeech and Glow-TTS. The proposed model, BVAE-TTS, is yet another non-autoregressive speech synthesis model (outputting spectrograms), with two key advantages over the aforementioned models: (a) no autoregressive teacher model is required, as in FastSpeech, which simplifies training, and (b) fewer parameters are needed than in Glow-TTS, since there is no bijectivity constraint (allowing a more expressive architecture to be used). Models are compared with inference speed and MOS, and BVAE-TTS compares favorably on both metrics when compared to Glow-TTS.\", \"pros\": \"1. The evaluation of the model is done well, in a clear way. LJSpeech is used, a dataset which is commonly used and easily accessible. MOS and inference speed are provided, and error bars are provided for MOS values. BVAE-TTS is compared to Glow-TTS and Tacotron 2 (one other non-autoregressive model, and one well-known AR baseline), and hyperparameters are provided. A single vocoder (pretrained WaveGlow) is used on all models, isolating the effect of the spectrogram prediction model used.\\n\\n2. Section 4.3, pertaining to using attention distributions to learn a duration predictor, is interesting and novel. Using positional encodings is standard and using a loss guide is unsurprising. However, while jitter and straight-through estimators are not uncommon, all of these things together make a compelling and novel approach to using attention to infer discretized durations and compensate for that train-test mismatch well. 
I believe that a similar technique could be used in other models as well.\\n\\n3. The model is an application of similar ideas from image synthesis, which is interesting, in that it demonstrates that some of those techniques work equally well for spectrogram synthesis. This sort of cross-modal result points to the strength of the method being used, which is a valuable data point for the research community.\", \"cons\": \"1. The biggest weakness of this paper, in my view, is that deciphering the model itself is quite difficult. Although the model bears resemblance to NVAE (for which code is released), understanding the fine details is tricky, and the paper does little to aid in that effort. \\n\\nIn particular, understanding the exact layer inputs and outputs and parameters of the normal distributions being used is difficult, and I believe the paper would benefit significantly from a pseudocode explanation of the network. For example, I did not understand why the generative model produced both $\\\\mu_l$ and $\\\\Delta \\\\mu_l$, and whether $\\\\mu_l$ was predicted with a dense layer or was the accumulation of the prior BVAE stacks' $\\\\Delta \\\\mu_l$ values (and similar for $\\\\Sigma$). \\n\\nI also wonder why the output of the attention layer is not provided to the encoder; perhaps there is a fundamental reason for this which I am missing, or perhaps this is simply an architecture choice.\\n\\nA very clear explanation of the method itself, perhaps as pseudocode for where the means and variances come from and which features they interact with and what is sampled when, would in my view make this among the top papers.\", \"recommendation\": \"Accept. The paper is well written and results are strong, although I would prefer if the method itself were explained more clearly.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
XOuAOv_-5Fx | Uncertainty Calibration Error: A New Metric for Multi-Class Classification | [
"Max-Heinrich Laves",
"Sontje Ihler",
"Karl-Philipp Kortmann",
"Tobias Ortmaier"
] | Various metrics have recently been proposed to measure uncertainty calibration of deep models for classification. However, these metrics either fail to capture miscalibration correctly or lack interpretability. We propose to use the normalized entropy as a measure of uncertainty and derive the Uncertainty Calibration Error (UCE), a comprehensible calibration metric for multi-class classification. In our experiments, we focus on uncertainty from variational Bayesian inference methods and compare UCE to established calibration errors on the task of multi-class image classification. UCE avoids several pathologies of other metrics, but does not sacrifice interpretability. It can be used for regularization to improve calibration during training without penalizing predictions with justified high confidence. | [
"variational inference",
"uncertainty",
"calibration",
"classification"
] | Reject | https://openreview.net/pdf?id=XOuAOv_-5Fx | https://openreview.net/forum?id=XOuAOv_-5Fx | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"2fp9CQzhQz",
"Fd2OJAEd-AD",
"4pFBsGON1Yp",
"qQ0uD5TVX7A",
"CvMv16ohWIe",
"b9qljoluQPv",
"UHyloZDTaMf",
"kX4DvwuE8D",
"rFTTGLW8FiE",
"TzHBni0BBH",
"4wLgZuv5QVo",
"cXlTFYNwq4S",
"iymdlltTViD",
"H6kLa6Z5xi2",
"ml__1UKzm7g",
"n75AOuEvCDu",
"jYJLKBQhSn_",
"jyaNcLhjGjG",
"sa5tXQc0DzB",
"wd8-Yb9VujY",
"2nu6qeuQS_B",
"8nTzC1CtA26",
"uuO5BygoZt-",
"C7VCzjPHm2j"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040360049,
1606244170719,
1606242242988,
1606201356306,
1606159446783,
1606077858844,
1605915070787,
1605884738799,
1605879402532,
1605878491656,
1605877486369,
1605866248232,
1605864882335,
1605618670015,
1605436613590,
1605260754465,
1605260646765,
1605257340766,
1605192755334,
1605192254779,
1604020669634,
1603950330703,
1603941605401,
1603897934480
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3535/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"~Jize_Zhang1"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3535/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3535/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3535/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3535/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This work proposes a novel metric for measuring calibration error in classification models.\", \"pros\": [\"Novel calibration metric addressing limitations of previously used metrics such as ECE\"], \"cons\": [\"Limited experimental validation on CIFAR-10/CIFAR-100 only\", \"Unclear impact beyond proposing a new calibration metric\", \"Unclear value of using the proposed UCE metric for regularization and OOD detection\", \"All reviewers appreciate the aim of the work to produce a calibration metric that addresses shortcomings of commonly used existing metrics such as expected calibration error (ECE), which is known to be sensitive to discretization choices. However, all reviewers remain in doubt whether the proposed metric (uncertainty calibration error, UCE) is truly a better metric of calibration than previous proposals. This doubt comes from two sources: 1. limited experiments that do not convincingly show the usefulness of UCE; and 2. interpretability of UCE not being as intuitive to the reviewers. The experiments also use UCE as a regularizer, but the benefit of doing so over simple entropy regularization is not clear.\", \"Overall the work is well-motivated and written and the proposed UCE measure is interesting. However, the reviewers remain unconvinced of the claimed benefits and the potential impact for measuring or improving calibration.\"]}",
"{\"title\": \"Additional Experiments\", \"comment\": \"Dear AnonReviewer3, please see the rebuttal version for the added toy experiments. We are still working on your suggested active learning experiment. However, due to the time constraints of the rebuttal phase, we were not able to provide these results yet. We will include the results in a possible camera-ready version.\"}",
"{\"title\": \"Please See Rebuttal Version\", \"comment\": \"Dear reviewers, we thank you again for your valuable feedback that helps us greatly to improve our manuscript. Please see \\u00a7 7 in the rebuttal version for new and updated results. We will integrate the parts of \\u00a7 7 into the main text and update the rest of the manuscript according to your suggestions in a possible camera-ready version. We hope to have addressed all raised concerns and appreciate an update on the rating.\"}",
"{\"title\": \"Additional Experiments\", \"comment\": \"We already added the results for SVHN and will add the results for Fashion-MNIST later today.\"}",
"{\"title\": \"quantile ECE\", \"comment\": \"I believe you're correct that Ovadia et al. did not compare quantile and fixed-width ECE, and it is unclear in the paper and code which one they used (there is a ` get_quantile_bins` method, but it doesn't appear to be called). Thank you for updating Table 2.\\n\\nIn a response to another comment, you mentioned that you were running experiments on Fashion-MNIST and SVHN, are those results in?\"}",
"{\"title\": \"Regarding Quantile ECE\", \"comment\": \"Please see Sect. 7 for our updated Table 2 as per your suggestion. We compare the regularization methods at optimal temperature as suggested by Ashukha et al. (2020).\\nThank you for pointing out this highly relevant paper, which we should have considered in the first place. We have carefully read the paper and did not find any comparison between quantile ECE and fixed-width ECE. After reviewing the provided source code (https://github.com/google-research/google-research/blob/master/uq_benchmark_2019/metrics_lib.py), we think that Ovadia et al. (2019) only used fixed-width bins in ECE computation. The only relevant mention seems to be:\\n\\n> When bins $ \\\\\\\\{\\\\rho_s : s\\\\in 1\\\\ldots S \\\\\\\\} $ are quantiles of the held-out predicted probabilities, $|B_s|\\\\approx|B_k|$ and the estimation error is approximately constant.\\n\\nWe think that this description of using quantiles as bin edges is equivalent to ACE. Ovadia et al. (2019) state that the ECE estimation error is constant across all bins when using quantiles. However, we did not find any statement saying that quantile ECE is robust against a varying number of bins. Please let us know if we have misunderstood anything.\"}",
"{\"title\": \"Appreciate the clarifications\", \"comment\": \"Thank you for clarifying the role of unnormalized entropy and the point you're making in Figure 3. Quantile ECE was used in Ovadia et al., 2019, \\\"Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift\\\", and reduces bias as compared with fixed-width ECE. Looking forward to your updated manuscript, after which I'll reevaluate my rating.\"}",
"{\"title\": \"UCE\", \"comment\": \"In this specific binary classification example with $ C=2 $, our Proposition 1 does not hold and we would rather suggest to use max p instead of $ \\\\tilde{\\\\mathcal{H}} $ (which is effectively regularization with classwise ECE (Kull et al., 2019)). Then, we would have\\n```\\nprint(cross_entropy(pred_a, true))\\nprint(cross_entropy(pred_b, true))\\n\\nprint(classwise_ece(pred_a, true))\\nprint(classwise_ece(pred_b, true))\\n```\\n```\\n0.2558\\n0.2298\\n\\n0.0100\\n0.0800\\n```\\nHowever, in a multiclass scenario, the same holds for UCE (plus the benefits of UCE ).\"}",
"{\"title\": \"Good catch!\", \"comment\": \"Thank you for spotting the mistake! It seems that when I flip the labels in my example, the (over)confident predictions still have the higher loss:\\n```true = np.asarray([0] * 9 + [1]```\\n('less confident', 0.36177298426628113, 'confident', 0.9211241006851196)\\n\\n(Although I would agree that your example is more realistic).\\n\\nI wonder how UCE compares here, but would you agree that this example indicates that the difference between NLL and UCE as a regulariser is perhaps more subtle than described in the paper?\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you again for your valuable feedback. We are currently working hard on a revision of our manuscript and would appreciate an updated rating if we were able to address all your concerns.\"}",
"{\"title\": \"Response to NLL Concerns\", \"comment\": \"Dear AnonReviewer3, thank you for this helpful code example.\\n\\n1. We think that you may have swapped the labels in line 7. Let us provide a specific code example:\\n```\\npred_a = np.asarray([[0.9,1-0.9]]*9 + [[0.8,1-0.8]])\\npred_b = np.asarray([[0.99,1-0.99]]*9 + [[0.89,1-0.89]])\\ntrue = np.asarray([0] * 9 + [1])\\n'less confident', float(cross_entropy(np.log(pred_a), true)), 'confident', float(cross_entropy(np.log(pred_b), true))\\n```\\n('less confident', 0.2557682553354537, 'confident', 0.22977279358712344) \\nI.e. NLL further pushes the confidence of the predictions to 1.0, favoring overconfidence.\\n2. By classwise we mean $ \\\\frac{1}{C} \\\\sum_{c=1}^{C} \\\\mathrm{UCE}_{c} $, where $ \\\\mathrm{UCE}_{c} $ is computed for samples of class $ c $. We will add sentences on this to our revision.\"}",
"{\"title\": \"re 2\", \"comment\": \"Re 2: This is a good point, agreed. The accompanying proof of this fact is also of value here. I do think it's helpful to acknowledge this in the paper.\", \"re_3\": \"That sounds good! Perhaps exploring the limit all the way to 1 bins can help clarify that there is a minimum number of bins required to correctly estimate the metric.\", \"re_4\": \"Great, I think that insight is helpful for the reader.\", \"re_5\": \"This seems to me to be the missing piece of the story. If you propose that UCE should be the go-to metric for comparing different methods/models on calibration error, it seems important to demonstrate practical settings in which UCE is more informative than alternative metrics for model selection.\"}",
"{\"title\": \"regarding NLL\", \"comment\": \"Re 1.\\nIt seems that this is an incorrect characterisation of NLL. NLL heavily penalizes overconfident incorrect predictions, see the following:\\n```\\nimport jax\\nfrom jax import nn\\nfrom jax import numpy as np\\n\\npred_a = np.asarray([[0.8,0.2]]*10)\\npred_b = np.asarray([[0.9999,1-0.9999]]*10)\\ntrue = np.asarray([1] * 9 + [0])\\n\\ndef cross_entropy(logprobs, target_class):\\n nll = np.take_along_axis(logprobs, np.expand_dims(target_class, axis=1), axis=1)\\n ce = -np.mean(nll)\\n return ce\\n\\n'less confident', float(cross_entropy(np.log(pred_a), true)), 'confident', float(cross_entropy(np.log(pred_b), true))\\n```\\n('less confident', 1.470808506011963, 'confident', 8.28931713104248)\\n\\nI'm not sure what you mean with classwise. Do you mean binary multi-class prediction rather than categorical prediction?\"}",
"{\"title\": \"Response\", \"comment\": \"Dear AnonReviewer4, thank you for appreciating our work and for your thorough review. Below we try to respond to each issue raised and hope to meet your expectations.\\n\\n1. Your main concern seems to be that we do not make a strong statement that our metric is beneficial. As it is difficult to highlight the distinct strengths of our metric in real-world experiments, we additionally conducted toy experiments that clearly show cases where UCE is able to capture miscalibration but other metrics fail. We emphasize that ECE and MMCE can be minimized by models with constant output and that ACE produces arbitrary values for a varying number of bins. We are confident that our metric provides reasonable benefits and can be useful to the community. We will highlight the benefits more in our revision and hope that we can convince you of this as well.\\n2. When we compare at optimal temperature (as suggested by Ashukha et al. (2020)), UCE regularization is at least as good as MMCE reg. and outperforms entropy reg. (see Fig. 6 in appendix). We will add a table with metric values at optimal temperature to the main text to highlight the benefits of UCE regularization. However, using the UCE as a regularizer is not our main contribution and, as you have already mentioned, rather an interesting additional feature.\\n3. We think that improved model selection is mainly given by avoiding the pathologies of the other metrics. Secondly, we argue that UCE is as interpretable as ECE and easier to understand for practitioners than e.g. MMCE. We would gladly conduct another experiment that can directly measure and compare the metrics, if you have any suggestion.\\n\\nAnswers to additional comments\\n\\n1. Thank you for your suggestion. We will include notes about NLL and Brier being strictly proper scoring rules.\\n2. Thank you for pointing out that the definition of perfect calibration can be traced back beyond Guo et al., (2017). 
We will revisit Brier (1950) and update our manuscript accordingly.\\n3. Thank you for this comment. We agree that our assumption is strong for small numbers of classes (e.g. $C < 10$). However, we think that this assumption is reasonable in empirical settings, where $ C = 100 $ (CIFAR-100) or $ C = 1000 $ (ImageNet). We explicitly mention the multi-class setting for our metric in the title of our paper. To shed additional light on this, we will add a figure to our manuscript that shows the effect of an increasing number of classes on the normalized entropy and will discuss this caveat in our conclusion.\\n4. Thank you for pointing out this confusion. As you suggested above, we will add a note about strictly proper scoring rules to our revision. Further, we will visually separate the NLL and Brier score values in Table 1 in order to not directly compare calibration metrics to strictly proper scoring rules.\\n5. As already mentioned above, UCE regularization outperforms entropy regularization when compared at optimal temperature (Ashukha et al., 2020). This can already be seen in Figures 6 & 7 and holds for both CIFAR-10 and CIFAR-100. We will further highlight this in our revision. Additionally, we will follow your suggestion and add the non-regularized baseline to the table.\\n6. Thank you for pointing out relevant prior work. We will consider this in our revision. It is correct that the rejection experiments have already been conducted with unnormalized entropy. Our experiments aim at highlighting the interpretability of UCE/normalized entropy: Rejecting test samples where $ \\\\tilde{\\\\mathcal{H}} > 0.2 $ will result in a classification error $< 0.2$ for the remaining samples if the model is well-calibrated (see Fig. 9 & 2). We argue that using normalized entropy as an uncertainty measure is as interpretable as max p, but avoids the pathologies of max p when used in a calibration metric.
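To make the rejection rule above concrete, here is a minimal plain-NumPy sketch (an illustrative example with made-up class probabilities and the 0.2 threshold from the text; not the evaluation code used in the paper):

```python
import numpy as np

def normalized_entropy(probs):
    # H~(p) = -(1 / log C) * sum_k p_k log p_k, so H~ lies in [0, 1]
    C = probs.shape[-1]
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1) / np.log(C)

# made-up predictive distributions over C = 4 classes
preds = np.asarray([
    [0.97, 0.01, 0.01, 0.01],  # confident prediction -> low normalized entropy
    [0.40, 0.30, 0.20, 0.10],  # unsure prediction    -> high normalized entropy
])
unc = normalized_entropy(preds)

# rejection rule: keep only samples with normalized entropy <= 0.2
keep = unc <= 0.2
```

For a well-calibrated model, the retained samples then have a classification error below the threshold, which is the interpretability argument made above.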
We will make this clearer in the text.\\n\\nThank you again for your detailed review. We hope to have taken all concerns into account and are currently working hard on the revision of our manuscript. We hope that we can convince you of our work and would appreciate an update of your rating. We are open to any further discussion.\\n\\nReferences\\n\\n*see paper*\"}",
"{\"title\": \"Benefits of UCE\", \"comment\": \"Dear AnonReviewer3, thank you for recognizing the importance of our work and for your constructive feedback. In the following we will address your concerns point by point.\\n\\nAd 1.: First, we want to stress that the use of UCE as a regularizer is not our main finding, but rather an interesting fact. UCE regularization works best when computed classwise (in a similar manner to ACE). Consider the following example: a batch contains mainly samples from class 1 and a few samples from class 2, all predicted as class 1 with high confidence. Increasing the confidence of all predictions further reduces the NLL, whereas UCE is only reduced if the confidence of the overconfidently false predictions is reduced. Moreover, when compared at optimal temperature (as suggested by Ashukha et al. (2020)), UCE regularization is at least as good as MMCE reg. and outperforms entropy reg. (see Fig. 6 in appendix). We will add a table with metric values at optimal temperature to the main text to highlight the benefits of UCE regularization.\\n\\nAd 2.: Thank you for your question. It is correct that the rejection experiments could have been conducted with vanilla entropy. Our experiments aim at highlighting the interpretability of UCE/normalized entropy: Rejecting test samples where $ \\\\tilde{\\\\mathcal{H}} > 0.2 $ will result in a classification error $< 0.2$ for the remaining samples if the model is well-calibrated (see Fig. 9 & 2). We argue that using normalized entropy as an uncertainty measure is as interpretable as max p, but avoids the pathologies of max p when used in a calibration metric. We will make this clearer in the text.\\n\\nAd 3.: Thank you for pointing that out. The figure was created by incrementing the number of bins in steps of 5 from 5 to 100. We will recreate this figure using an increment of 1 and a smaller y-axis range. Small fluctuations of the UCE values should then become visible.
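To give some intuition for why sensitivity to the number of bins matters in the first place, here is a minimal sketch of a fixed-width binned calibration-error estimate on synthetic data (an illustrative example only, not the code behind the paper's figures):

```python
import numpy as np

def binned_ece(confidences, correct, n_bins):
    # fixed-width binning estimate of the expected calibration error
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    est = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            est += in_bin.mean() * gap
    return est

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=2000)  # synthetic confidence scores
correct = rng.uniform(size=2000) < conf  # correctness sampled so the model is calibrated
estimates = {b: float(binned_ece(conf, correct, b)) for b in (5, 15, 50)}
```

Even though the synthetic model is calibrated by construction, the estimate drifts as bins shrink and per-bin accuracies become noisy; a metric whose value is stable across bin counts removes this ambiguity.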
However, this does not change our finding that UCE is not sensitive to the number of bins and provides a consistent ranking of the models.\\n\\nAd 4.: This is an interesting comment, indeed. For very large-class problems, max p based metrics and our metric should be equivalent, but max p based metrics are computationally more efficient. We will follow your suggestion and discuss this in our upcoming revision.\\n\\nAd 5.: Thank you for your assessment of our experiments. We already conducted additional toy experiments that better highlight the benefits of UCE. The experiments show that UCE can measure miscalibration where the other metrics fail. We are currently working hard to also include an active learning experiment. We hope that this will convince you of our work.\\n\\nWe hope to have considered all of your concerns and welcome any further discussion.\\n\\nReferences\\n\\n*see paper*\"}",
"{\"title\": \"Relevant Work\", \"comment\": \"Dear Jize Zhang, thank you for pointing out your highly relevant work. We will review your paper and consider it in our upcoming manuscript.\"}",
"{\"title\": \"Reducing Confusion\", \"comment\": \"Dear AnonReviewer1, we thank you very much for your thorough review of our work and the positive comments. In the following we try to address all your concerns point by point.\\n\\n> I found the experiments to be well aligned to evaluate the approach, although limited in terms of the datasets used (only CIFAR-10 and CIFAR-100); a greater variety of datasets would be more convincing of the overall good performance of the approach, especially if datasets with a varied number of classes can be tested.\\n\\nThank you for mentioning our limited use of data sets. As of writing this, we are conducting additional experiments on SVHN and Fashion-MNIST and will provide the results in our revised manuscript. We expect these results to be aligned with the results on CIFAR-10/100.\\n\\n> Moreover, looking at the results in detail (Table 1), UCE does not appear to be particularly strong, having a worse calibration than ECE and ACE on CIFAR-10, but slightly better on CIFAR-100, assuming that we want it to be increased to reach the real error rate obtained.\\n\\nAfter reading your comment, we realized that we did not present our results clearly and comprehensibly. Table 1 shows the considered metrics on uncalibrated models and we do not expect any metric to reflect the model error at this point. Moreover, we argue that for well-calibrated models, the normalized entropy (as a notion of uncertainty) should reflect the model error. We do not argue that the value of the metric itself reflects the error; it rather shows the deviation from our assumption of perfect calibration, see Eq. (22). We will add details to the caption of Table 1 and rewrite the sentences discussing the results in the main text.
Thank you very much for drawing our attention to this.\\n\\n> Moreover, the presentation of the results in Table 1 is messy: it gets difficult to match the calibration error with the accuracy; providing the classification error instead of accuracy would help to make a direct comparison with calibration error.\\n\\nWe think that this issue is mainly addressed above. We provide the accuracy as we do not expect the calibration metrics to equal the error in Table 1; and providing the classification error could further add confusion. However, we will rework Table 1 as suggested by AnonReviewer3 and hope that this, in addition to a more detailed description of the results, will meet your expectations.\\n\\n> Moreover, why are the last two columns in Table 1 (Brier and NLL) provided as floating-point values instead of percentages, as with the other columns? That's unnecessary confusion that should be fixed.\\n\\nThank you for pointing that out. We mainly followed related work where ECE-like metrics were provided as percentages, and Brier and NLL were provided as floating-point values, e.g. see (Kumar et al., 2018). Moreover, as pointed out by AnonReviewer4, NLL and Brier are strictly proper scoring rules and have to be decomposed in order to be directly comparable to other calibration metrics. We will visually separate the NLL and Brier scores from the other calibration metrics using a vertical bar to reduce this confusion (see also answer to AnonReviewer4).\\n\\n> Conversely, I am not sure of the relevance of providing all the detailed information on Bayesian methods in the second part of Sec. 2. It can be presented in a more concise way, as it uses a lot of space to explain well-known approaches.\\n\\nThank you for this suggestion. We will shorten the description of the Bayesian methods and use the free space for the results of the new experiments.\\n\\n> In terms of potential impact of that paper, I still need to be convinced.
What tells me that this is not just yet another calibration metric? I think that the paper could have been made stronger on that aspect.\\n\\nMany recent papers have highlighted the need for an appropriate calibration metric (see our response to AnonReviewer2). Our metric reliably detects miscalibration as it avoids various pathologies of other metrics. We are convinced that our work is a valuable contribution to the community. To respond to your comment, we will rephrase the conclusion of our paper to make it stronger. In addition to the expected results from the new experiments, we hope to convince you and would be very grateful if you would update your rating accordingly.\\n\\nReferences\\n\\n*see paper*\"}",
"{\"title\": \"Related work using KDE to mitigate the bias/binning issues in calibration error estimation\", \"comment\": \"Please check out our recent work on the use of a KDE-based ECE estimator [1]. By replacing the histogram with KDE, we provide a more reliable evaluation of the calibration error while mitigating the bias & binning sensitivity of existing histogram ECE estimators. The code is also available online.\\n\\n[1] Jize Zhang, Bhavya Kailkhura, and T Han. \\\"Mix-n-Match: Ensemble and compositional methods for uncertainty calibration in deep learning.\\\", ICML 2020, https://arxiv.org/pdf/2003.07329.pdf\"}",
"{\"title\": \"General Response\", \"comment\": \"We thank all reviewers for their valuable feedback, as it helps us to improve our paper considerably. We are currently conducting additional experiments as requested by the reviewers and will update the manuscript accordingly. We welcome an open discussion and are working hard to address all issues raised.\"}",
"{\"title\": \"Regarding novelty\", \"comment\": \"Dear AnonReviewer2, thank you very much for your valuable feedback. In the following, we try to address every raised concern and hope to meet your expectations.\\n\\nThank you for pointing out two relevant papers, one of which we have already taken into account. We will review the other and consider it in our manuscript. Your main concern seems to be the lack of novelty of our contribution. We do not propose the use of (normalized) entropy for measuring uncertainty as our sole contribution, since this has been extensively studied in the papers you mentioned. Rather, the proposed novelty is a metric for measuring calibration (of a classification model) based on normalized entropy. Recent well-recognized papers have highlighted the lack of a suitable calibration metric and we aim to address this issue (Nixon et al., 2019; Ashukha et al., 2020; Kull et al., 2019; Kumar et al., 2019). The perfect calibration metric has yet to be found, and we believe we make a valuable contribution towards it. Our metric avoids pathologies of other metrics and has several favorable properties for measuring calibration (see p. 5). We hope that we have met your requirement for novelty.\\n\\nThe contribution of Figure 3 is to show that for calibrated models, normalized entropy correlates with top-1 error, thus additionally providing an empirical justification for the use of normalized entropy in our metric (and definition of perfect calibration). The top-1 error decreases monotonically with the normalized entropy. Moreover, the results of Fig. 3 are novel, as Lakshminarayanan et al. (2017) only used max p for their rejection experiments. In our results, Bayesian methods perform much better than reported by Lakshminarayanan et al. (2017) and yield a top-1 error close to 0 for predictions with uncertainty < 0.1.
We will add sentences to our manuscript for clarification.\\n\\nThe results reported in Table 1 were produced using only NLL/cross-entropy loss. We will add the unregularized baseline to Table 2 for better comparison. Many thanks for this advice!\\n\\nWe did not consider quantile ECE and will gladly include this in our revision. However, we did not find any related work describing quantile ECE in more detail. Can you point out a reference or describe quantile ECE briefly and how it differs from ACE?\\n\\nWe hope that we have addressed all your concerns and would be grateful if you update your rating of our work accordingly.\\n\\nReferences\\n\\n*see paper*\"}",
"{\"title\": \"Review\", \"review\": [\"Thanks for the interesting paper!\", \"Summary\", \"The authors focus on the important problem of improved calibration measures as compared to the (now fairly standard) expected calibration error (ECE). More specifically, they define a new \\\"Uncertainty Calibration Error\\\" (UCE) metric based on the normalized entropy of the predictive distribution, rather than the max probability (as in ECE). The metric still uses fixed-width binning (as in ECE), and they motivate the interpretation w.r.t. perfect calibration based on a theoretical limit. They provide a set of experiments to show the differences in model ranking, sensitivity to number of bins, etc. between UCE, ECE, etc. on various models.\", \"Strengths\", \"As noted in previous literature (and referenced in this paper), improved measures of calibration are an important research area.\", \"The authors provide a great background section to place their research within the broader area of uncertainty research.\", \"The experiments on sensitivity to the number of bins are informative and highly relevant in the context of previous literature (e.g., Nixon et al. (2019)) where it has been shown that ECE is particularly sensitive to this setting.\", \"Weaknesses\", \"As I have noted in more detail down below, I believe this paper suffers from a few weaknesses. Overall, at the end of the paper as a reader, I'm still left with questions of whether UCE is truly a better calibration metric. As noted above, the insensitivity to number of bins is great and an improvement on ECE and ACE. However, I don't believe the remaining experiments make a strong case that the metric (1) provides a better measure of calibration, (2) yields consistently improved model performance when used as a regularizer (though it's interesting that it can be used as one!), or (3) allows for improved model selection. Additionally, I believe its interpretability is limited.
As noted below, the experiments have mixed results or make comparison claims that detract from the overall message, which I find troubling from an experimental rigor standpoint. Furthermore, I think this paper would benefit from experiments that are set up such that they can directly measure and compare the ability of the metrics to measure calibration error.\", \"Recommendation\", \"Given the above strengths and weaknesses, I'm currently inclined to suggest rejection of the paper in its current form. However, I think this could be a great paper and as a community I don't believe we have yet devised the perfect calibration metric -- perhaps this could be it! I would highly recommend the authors push on the points above.\", \"Additional comments\", \"p. 3, 4: It could be informative to include notes about NLL and Brier score being strictly proper scoring rules (Gneiting & Raftery, 2007; Parmigiani & Inoue, 2009) that theoretically should be maximized only when the forecaster emits the distribution they believe to be true, and thus should, in theory, be well-calibrated asymptotically. However, we indeed know from Guo et al. (2017) that empirically, models can still overfit, leading to poor calibration.\", \"p. 4: The definition of perfect calibration can be traced back to Brier (1950), and, unlike ECE, is not limited to only the max predicted probability. Rather, for any predicted probability $p_k$ for class $k$, the probability that the true class is class $k$ should be equal to $p_k$ for all $p_k$ and all $k$. That is, $\\\\mathbb{P}(Y = k | P_k = p_k) = p_k, \\\\forall p_k \\\\in [0, 1], \\\\forall k \\\\in [1, K]$.\", \"p. 5: UCE is based on an argument that normalized entropy approaches the top-1 error in the limit of the number of classes going to infinity. While this is interesting theoretically, this assumption seems too strong for empirical settings, and I think this affects the interpretability of the metric as claimed in the conclusion.\", \"p.
6, 7, Section 5.1: This section (and Figures 4 & 5 in the appendix) make claims that NLL and Brier score \\\"fail at comparing calibration of models with different accuracy, as the metrics are always lower for models with better accuracy\\\". I find this argument both surprising and confusing in terms of motivation. As strictly proper scoring rules, they should indeed have lower values for better probabilistic models. Although accuracy is a non-proper scoring rule, it should still correlate well with those strictly proper rules, so it is expected that the better models with lower NLL / Brier score will (typically with some variance) have higher accuracy (variance being due to the non-proper nature of accuracy). All strictly proper scoring rules can be decomposed into calibration and refinement terms (Parmigiani & Inoue, 2009; DeGroot & Fienberg, 1983), but in the non-decomposed setting, it is not expected that these rules would directly measure calibration. Therefore, given the focus on calibration measures, I'm confused as to the motivation behind comparing to NLL & Brier score directly (beyond the overconfidence analysis from Guo et al. (2017)) as a means of motivating the usefulness of UCE.\", \"p. 8, 12: In the regularization experiment, different regularization approaches are being compared in terms of calibration, but it's difficult to assess the results. In Table 2 (which needs an additional entry for the non-regularized result from Table 1), UCE regularization appears to improve accuracy, NLL, and Brier score over the non-regularized baseline. Interestingly though, NLL, Brier score, ECE, ACE, UCE, and MMCE (i.e, all metrics other than accuracy) point towards entropy-regularization being superior. It does result in a lower accuracy than UCE reg and the baseline, but by the other metrics (including the strictly proper scoring rules NLL and Brier score), it produces a better probabilistic model. 
For CIFAR-10 UCE reg is worse than the other regularization methods and the baseline.\", \"p. 8: Rejection & OOD Detection: This has been studied previously for unnormalized entropy, which should yield the same results. See, e.g., Malinin & Gales (2018), Ren et al. (2019).\", \"Minor\", \"p. 3: s/as non Bayesian/as a non-Bayesian/\", \"p. 7: Figure 1 is too small.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well-motivated metric for uncertainty calibration; novelty is unclear\", \"review\": \"Update: After reading the other reviews and responses, and in light of the authors' updates to the paper, I have increased my score to a 6.\\n\\nThis paper proposes a new metric for uncertainty calibration, based on comparing the entropy of the marginal class probabilities conditioned on predicted class with the entropy of the predicted probabilities. The metric avoids the failure mode of ECE, where predicting the relative frequencies of classes results in perfect calibration, and can be used as a regularizer in a loss function. The paper demonstrates that regularization with UCE yields better-calibrated uncertainty on CIFAR predictions without sacrificing accuracy.\\n\\nThe paper is well-written and well-motivated. I\\u2019m uncertain as to its novelty. In particular, entropy as a basis for uncertainty estimation is well-explored (and was used as a baseline in (Lakshminarayanan et al., 2017) as well as Jie Ren et al., \\u201cLikelihood ratios for out-of-distribution detection,\\u201d NeurIPS 2019). It\\u2019s unclear what the results in Figure 3 contribute in light of these baselines (besides the normalization by the constant C).\\n\\nWhich loss function was used to produce the results in Table 1? If it\\u2019s the loss in (25), it would also be useful to see calibration metrics for NLL loss alone.\\n\\nFigure 2 shows strong sensitivity of ACE to the number of bins. Quantile ECE (an ECE metric with bins defined by quantiles instead of fixed-width) often shows less sensitivity -- was this metric considered as well?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good proposal but an enhanced set of experiments is required\", \"review\": \"This paper proposes a new calibration error measurement named UCE (Uncertainty Calibration Error) for deep classification models. It consists in doing a calibration in order to achieve \\\"perfect calibration\\\" (i.e., the uncertainty provided is equivalent to the classification error at all levels in [0, 1]), relying on normalized entropy for multiclass classification. This UCE is well justified for classification problems with several classes to process, where the entropy is demonstrated to be asymptotically equivalent to the classification (top-1) error. A point with this UCE metric is that it has some interpretability properties in terms of its value, and is said to be robust to the number of bins used.\\n\\nThe proposed metric is well explained, and justified, although I am wondering how well the assumption stands that the normalized entropy approaches the top-1 error for a reasonable number of classes (e.g. C=10, as with CIFAR-10, or C=100, as with CIFAR-100). The properties presented are interesting.\\n\\nI found the experiments to be well aligned to evaluate the approach, although limited in terms of the datasets used (only CIFAR-10 and CIFAR-100); a greater variety of datasets would be more convincing of the overall good performance of the approach, especially if datasets with a varied number of classes can be tested. Moreover, looking at the results in detail (Table 1), UCE does not appear to be particularly strong, having a worse calibration than ECE and ACE on CIFAR-10, but slightly better on CIFAR-100, assuming that we want it to be increased to reach the real error rate obtained. Moreover, the presentation of the results in Table 1 is messy: it gets difficult to match the calibration error with the accuracy; providing the classification error instead of accuracy would help to make a direct comparison with calibration error.
Moreover, why are the last two columns in Table 1 (Brier and NLL) provided as floating-point values instead of percentages, as with the other columns? That's unnecessary confusion that should be fixed.\\n\\nOverall, I found the paper to be correct and relatively well written. I think that more room should have been given to experimentation, like with other datasets and with more space for OoD rejection and detection. Conversely, I am not sure of the relevance of providing all the detailed information on Bayesian methods in the second part of Sec. 2. It can be presented in a more concise way, as it uses a lot of space to explain well-known approaches.\\n\\nIn terms of potential impact of that paper, I still need to be convinced. What tells me that this is not just yet another calibration metric? I think that the paper could have been made stronger on that aspect.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Reviewer3\", \"review\": \"The work addresses an important problem in the study of uncertainty estimation: how does one compare model uncertainty at differing accuracy levels? The work proposes a novel uncertainty metric, relates this to existing methods and provides robust evaluation of the various merits of this approach. The paper is easy to follow.\", \"i_have_the_following_concerns_with_the_work\": \"1. Regarding the use of UCE as a regularizer: how is the described behaviour of UCE different from the NLL loss? The NLL loss should penalize highly confident incorrect predictions and strives for confident predictions in high accuracy. What does the UCE regularizer add here? Can table 2 include a non-regularized baseline as well to study this? In appendix A.3. it is said that UCE performs on par without regularization; then what is the point of proposing UCE as a regularizer?\\n2. What is the point of proposing the use of the normalized entropy as a thresholding factor for OOD detection? It seems that vanilla entropy would behave exactly the same. Is this considered to be a novel contribution in this work? \\n3. Why is figure 2 (right) completely flat for UCE? Are there values not shown here where calibration error does change? Perhaps this should be included in the plot. \\n4. Am I correct to say that max-p-based metrics might be preferable in very large-class problems such as language models? The paper does not discuss the computational tradeoffs of the method, and I believe this should be included. \\n5. It appears that the experiment section does not provide much evidence that this metric is favourable in selecting the best model for a downstream task where uncertainty is needed. This could be evaluated by e.g. an active learning problem. I believe it makes sense to include such an experiment. 
Right now, Table 1 and the accompanying discussion do not convince me that UCE is somehow more beneficial.\\n\\nOverall, the work has merit and is of interest to the community. However, the proposal of the use of the metric as a regulariser and an OOD scoring function seems unproductive and, if so, distracts from the core contribution. This core contribution is understudied in the work. The work would benefit from more analysis into the computational tradeoffs, and evaluation of the signal that the proposed metric provides on model selection for downstream uncertainty tasks.\", \"nitpick\": [\"Table 1 could benefit from a vertical bar between the two datasets to clarify that the numbers are not comparable.\"], \"update\": \"Although the paper has improved, I still vote for rejection. The new insight of binary-classwise v/s multiclass UCE as a regularizer seems poorly explored in the paper and would benefit from closer study. This appears to be the basis of the improved results in Table 1.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
aUX5Plaq7Oy | Learning continuous-time PDEs from sparse data with graph neural networks | [
"Valerii Iakovlev",
"Markus Heinonen",
"Harri Lähdesmäki"
] | The behavior of many dynamical systems follows complex, yet still unknown partial differential equations (PDEs). While several machine learning methods have been proposed to learn PDEs directly from data, previous methods are limited to discrete-time approximations or make the limiting assumption of the observations arriving at regular grids. We propose a general continuous-time differential model for dynamical systems whose governing equations are parameterized by message passing graph neural networks. The model admits arbitrary space and time discretizations, which removes constraints on the locations of observation points and time intervals between the observations. The model is trained with the continuous-time adjoint method, enabling efficient neural PDE inference. We demonstrate the model's ability to work with unstructured grids, arbitrary time steps, and noisy observations. We compare our method with existing approaches on several well-known physical systems that involve first and higher-order PDEs with state-of-the-art predictive performance. | [
"dynamical systems",
"partial differential equations",
"PDEs",
"graph neural networks",
"continuous time"
] | Accept (Poster) | https://openreview.net/pdf?id=aUX5Plaq7Oy | https://openreview.net/forum?id=aUX5Plaq7Oy | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"PBDoIo2MPKd",
"NtE-lMBXQ05",
"5br7KpuQBne",
"aMS0PzlLyf",
"ERZb4cbmEjU",
"eZ02lM2LOoF",
"-PP9rXGojlV",
"qy2IdckgmEi",
"HtZcGS1JAdq",
"Z1GPwkjCPk",
"i0rq250_F3S",
"TF_ZVnO4vwR",
"LcfajJJFJGv",
"BlGjecSyHcO"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040431482,
1606186520339,
1606167465292,
1606155854044,
1606126202072,
1605881179544,
1605880360755,
1605880255623,
1605880140301,
1605879977396,
1604182528911,
1603990082270,
1603948362352,
1603809944029
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3532/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3532/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3532/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3532/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3532/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3532/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3532/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3532/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3532/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3532/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3532/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3532/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3532/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a new method for learning a model for spatio-temporal data described by an (unknown) spatio-temporal PDE. The model learns a continuous-time PDE using the adjoint method and uses graph networks to perform message passing between different discrete time steps on a grid obtained with Delaunay triangulation.\\n\\nThe method initially received 3 favorable and 1 unfavorable ratings, but convincing responses to some of the raised issues led to unanimous recommendations for acceptance (not all reviewer feedback after the rebuttal has been made public, but feedback has been given privately to the AC on these issues by different reviewers).\\n\\nThe reviewers appreciated the novelty of the method and the numerous ablations.\\n\\nInitially perceived weaknesses were missing key experiments on generalization over different grid discretizations, the simplicity of some experiments, and links to prior art - many of these points have been dealt with by the authors in their response.\\n\\nThe AC concurs and proposes acceptance.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your thorough response.\\n\\nAnswers 2 to 5 have clarified all of the respective questions I raised.\\n\\nFor question 1, though your answer was clear, I still have slight reservations with respect to the novelty of the paper with respect to GNODE, even after considering your response here, since the contributions described in the answer amount to a simple (though effective) modification to the previous method and a new motivation. Nevertheless, despite the possibly incremental updates with respect to previous work, your results are still stronger than the ones seen in GNODE. Moreover, the other reviewers seem to be content with the novelty of the paper, as this was not an issue brought to attention by the others. Therefore, taking all of these factors into account, I will update my score to a \\\"marginally above acceptance threshold\\\".\"}",
"{\"title\": \"re-response\", \"comment\": \"Q1/A1, thanks for the highlights\\n\\nQ2/A2, thanks for the prompt response and extra table.\\n\\nI am pleased with the edits and additional results and will update my score.\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for the clarifications.\\n\\n**Q1**: I would have liked to see the updated description of the setting and thought-out application cases, with a clearer specification of where it wouldn't work. Otherwise, the claims feel too broad and not supported by the manuscript. \\n**A1**: We updated the manuscript and added the required comments in green (Sections 2 and 3.1). The update in Section 2 includes all the assumptions we make about the data and the dynamical system that we learn.\\n\\n**Q2**: I feel the key experiment would be to test transfer of a learned model on some inputs, to new inputs (different grid positions, different number of nodes) and not just always retraining on different grids. \\n**A2**: Indeed, this is a very important property of the model and this is exactly what we do in all of our experiments. We randomly downsample train and test data from high-fidelity simulations, so the grids for train and test simulations are different (different node positions, constant number of nodes). We added this detail to the experiment descriptions as well.\\n\\nOur model is not trained for a specific number of nodes and will work well on any grid with neighborhoods similar to the ones on which the model was trained. For example, Figure 2b shows a grid with 750 nodes. As can be seen, there are neighborhoods of various sizes ranging from large to small ones, so the model trained on grids with 750 nodes should generalize fairly well to grids with 1500 and 3000 nodes (but not vice versa). We demonstrate this in the table below.\\n\\n| grid\\\\model | 3000 | 1500 | 750 |\\n|:----------:|:------:|:------:|:------:|\\n| 3000 | 0.0136 | 0.0286 | 0.0321 |\\n| 1500 | 0.0468 | 0.0322 | 0.0345 |\\n| 750 | 0.1201 | 0.0954 | 0.0717 |\\n\\nHere we take models trained on 3000, 1500 and 750 nodes and evaluate their mean relative errors on test sets with 3000, 1500 and 750 nodes. 
As expected, the model trained on 3000 nodes generalizes poorly to coarser grids, while the model trained on 750 nodes performs fairly well on all grids. The reason is that grids with 750 nodes contain neighborhoods of various sizes. We believe that this is an important experiment which is missing from our manuscript, so we will add it to the revised version.\\n\\nThe model trained on 750 nodes performs significantly better on test data with 3000 nodes than with 750 nodes. This is because the finer grid allows more accurate predictions, so the error does not grow as large as for the coarse grid with 750 nodes.\"}",
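The mean relative errors in the table above are not defined explicitly in this thread; a common convention averages the ratio of the error norm to the target norm over snapshots. The sketch below is an editorial illustration in plain Python — the function name `mean_relative_error` and the per-snapshot norm-ratio definition are assumptions, not taken from the paper:

```python
import math

def mean_relative_error(preds, targets):
    """Average of ||pred - target|| / ||target|| over snapshots.

    One common definition of "mean relative error"; the exact metric
    used in the paper may differ in detail.
    """
    errs = []
    for p, t in zip(preds, targets):
        num = math.sqrt(sum((pi - ti) ** 2 for pi, ti in zip(p, t)))
        den = math.sqrt(sum(ti ** 2 for ti in t))
        errs.append(num / den)
    return sum(errs) / len(errs)

# toy example: two snapshots of a 2-node state
targets = [[1.0, 0.0], [0.0, 2.0]]
preds = [[1.1, 0.0], [0.0, 2.0]]  # 10% error on the first snapshot only
print(mean_relative_error(preds, targets))  # -> 0.05
```

Under this kind of definition, the 0.0136 entry in the table would mean predictions deviating from the reference solution by roughly 1.4% in norm, averaged over the test rollout.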
"{\"title\": \"response\", \"comment\": \"Thanks for the authors' detailed response.\\nQ3/A3: my bad, misunderstanding on my side.\\n\\nQ1/A1: I would have liked to see the updated description of the setting and thought-out application cases, with a clearer specification of where it wouldn't work.\\nOtherwise, the claims feel too broad and not supported by the manuscript.\\n\\nQ5/A5: I might not have conveyed that in my initial review, but I feel the key experiment would be to test transfer of a learned model on some inputs, to new inputs (different grid positions, different number of nodes) and not just always retraining on different grids.\\nThanks for the clarification regarding the reason for the additive noise experiment.\\nJust a comment here, but you could use a more principled approach (a generative approach, as for example in Sec. 5 of the neural ODE paper) to deal with noise, rather than hoping the model would still fit in the presence of errors.\\n\\n\\nOverall, I like the method but I maintain some of my initial concerns on description\\nand evaluation (choice of experiments rather than methodology).\"}",
"{\"title\": \"continued\", \"comment\": \"**Different grid structures**: Most previous methods have assumed that the data is collected from a regular grid, whereas our method works with arbitrary measurement points/grids.\\n\\n**Different measurement time intervals**: Our method is not affected by different measurement time intervals because our model is continuous-time and thus the system state can be evaluated at any time point. Moreover, our method is the only method that can infer unknown PDEs from arbitrary measurement points collected at arbitrary time points.\\n\\n**Varying amount of additive noise**: As was mentioned previously, the function $\\\\hat{F}$ considers only its immediate neighbourhood. Therefore, its predictions could be very sensitive to noise as it does not have access to global information that could help to cancel the noise. We reported prediction accuracy for varying amounts of noise.\\n\\n**References**:\\n\\n[1] Shin et al. \\\"On the convergence and generalization of physics informed neural networks.\\\" (2020). \\n[2] Raissi et al. \\\"Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.\\\" (2019). \\n[3] Bhattacharya et al. \\\"Model reduction and neural networks for parametric PDEs.\\\" (2020). \\n[4] Kutyniok et al. \\\"A theoretical analysis of deep neural networks and parametric PDEs.\\\" (2019).\"}",
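The noise-sensitivity concern above — a purely local $\hat{F}$ has no global information with which to cancel noise — can be illustrated with an ordinary finite-difference stencil standing in for the learned local operator. This is an editorial sketch, not the paper's model; the stencil and the test function are assumptions:

```python
import math

def second_diff(u, h, i):
    # central finite-difference estimate of d2u/dx2 at index i
    return (u[i - 1] - 2.0 * u[i] + u[i + 1]) / h**2

h = 0.01
xs = [k * h for k in range(200)]
u = [math.sin(x) for x in xs]

clean = second_diff(u, h, 100)  # close to -sin(1.0)

# perturb a single observation by a small measurement error
eps = 1e-3
u_noisy = list(u)
u_noisy[100] += eps
noisy = second_diff(u_noisy, h, 100)

# a pointwise error of eps shifts the local estimate by 2*eps/h^2
amplification = abs(noisy - clean) / eps  # -> 2 / h**2 = 20000
```

A pointwise measurement error of `eps` perturbs the local second-difference estimate by `2*eps/h**2`, so finer grids make purely local estimates more noise-sensitive — which is exactly the worry stated above, and why the noise experiments were reported.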
"{\"title\": \"Answer to AnonReviewer3\", \"comment\": \"**Q1**: For the PDE-Net comparison, I was wondering how the 3 time step sizes were incorporated into PDE-Net. Isn't this \\\"by default\\\" a fixed time step architecture? \\n**A1**: Indeed, this information should have been included in the experiment description. We used separate models for each time step.\\n\\n**Q2**: I also have to admit I couldn't follow the argumentation why graph nets, e.g., from Sanchez-Gonzalez et al. '18 aren't suitable in the context of the paper. As a plus, they wouldn't require a Delaunay triangulation or similar meshing step. \\n**A2**: The model from Sanchez-Gonzalez et al. perhaps could be adapted to learning PDEs. However, it was designed for physical systems of discrete agents (e.g. cartpole, pendulum, stick figures, etc.), and its main contributions were in modelling the internal structure of the agent (i.e. how the joints of the legs are linked to the joints in the torso, etc.). As such, it would require a considerable amount of work to adapt it to learning PDEs. We tried to select the most suitable models for our comparisons and found DPGN to be a more suitable and readily available graph-based discrete-time model to compare against.\\n\\nPlease note that meshing is a way of finding neighbors for each node and thus is a necessary step for any graph-based model, not only for ours.\\n\\nThe concept of PDEs with moving positions corresponds to the Lagrangian reference frame and is very interesting, but adapting the model to such moving PDE systems is not immediately obvious and would require more research.\\n\\n**Q3**: While the ablation study is generally nice, I was missing one central component here: as the paper targets continuous time (it's highlighted in the title), I expected an evaluation regarding how much is gained from the introduction of continuous time. The larger time steps of Fig. 
3 seem quite trivial, but what I instead hoped to see here was a comparison to a model that simply receives the chosen timestep dt as an additional input, and is trained in a supervised fashion with data from a few different time step sizes (i.e. non-continuous, but varying dt). I think it would be important to demonstrate the advantages of a continuous formulation. \\n**A3**: That's a great point. To the best of our knowledge, there are no previous neural-network-parameterized PDE models that are designed to be trained with a varying time step. This implies that currently available models would need to be augmented to have such a capability. This could be done by explicitly adding dt as an input, but the only models that allow this are the ones which learn an evolution map (e.g. DPGN). Models that learn dynamics (ours) or mimic integration schemes (PDE-Net) cannot use dt for their predictions without changing the model structure.\\n\\nAs for a comparison of our model with a discrete-time model trained on different dts, this was to some extent done in Section 3.2, Figure 7, where DPGN was used with a fixed time step (which is the best-case scenario for a discrete model with varying dts). If trained on a varying dt, DPGN would at best achieve performance similar to training with a fixed dt, but with multiple restrictions. The model would be restricted to dts similar to those in the train set, and the maximum dt would have to be below some \\\"critical value\\\" so that the model does not become unstable (e.g. as for dt=0.02 and 0.04 in Figure 7). Continuous-time models, on the other hand, can be trained with arbitrary time steps, as long as they allow capturing the system's dynamics, and can be tested on arbitrary time steps without deterioration of performance. 
Furthermore, the \\\"critical value\\\" of dt for such models is much larger than for discrete-time models (Figure 7).\\n\\nConsidering the above, we believe that all the information that such an experiment would provide is already contained in the paper. However, we definitely see the need for clarifying this point and adding a similar discussion to the paper.\"}",
"{\"title\": \"Answer to AnonReviewer1\", \"comment\": \"**Q1**: Similar graph-based methods that used continuous time to model differential equation dynamics had been previously presented, e.g. GNODE. The novelty of the proposed method might be limited.\\n**A1**: Indeed, GNODE follows a similar idea of using graph networks to represent interactions between objects. The difference is that its interactions are represented only through the adjacency matrix rather than through relative positions, which are crucial for learning PDEs. Furthermore, our method is well motivated as a natural data-driven extension of the method of lines, which is a general approach to solving PDEs. We show in \\\"Importance of relative positional information\\\" that our method can significantly improve on GNODE in modeling PDEs.\\n\\n**Q2**: Difference between train and test sets. \\n**A2**: Thanks for pointing that out. We agree that it is difficult to grasp the diversity and differences between the train and test sets from the initial conditions alone, so we included examples of train and test data in the appendices. The new Figures 13-15 show that the dynamics regimes in the train set significantly differ from those in the test set, but the model is still able to generalize beyond the training time horizon. \\n\\n**Q3**: The error in model rollouts over time seems to spike at the beginning and then quickly flatten out. It seems strange that errors would spike up initially and then not compound significantly over time. Do the authors have any intuition as to why this is the case? Is it maybe a consequence of the data samples reaching a sort of steady state after some time? \\n**A3**: This is an interesting point. First, the dynamics do slow down towards the end, but they do not reach a steady state. 
We included extra plots that show this in the previous answer.\\n\\nIt seems that this error curve behavior is explained by the simulations having much faster changes occurring in the very early phases. For instance, in Figs. 13-15 it is clear that the systems change a lot in the beginning, while becoming smoother towards the end. We quantified this effect by computing the average difference between two consecutive snapshots of the system's state, which shows that the differences have larger magnitude in the beginning. This implies a more challenging fitting problem, and hence higher errors in the early phases.\\n\\nThe initial spike in the error also comes from the fact that at time t=0 the error is also 0, and the first temporal increment then sees the error jumping upwards to some positive value.\\n\\nFinally, we want to note that this kind of error plot is commonly reported in other works as well (see [1] Figs. 3 and 12, [2] Figs. 11 and 20), which also signifies that this is common behavior in learning PDEs. \\n\\n**Q4**: How would the Delaunay triangulation being employed deal with possible obstacles present in the spatial domain? \\n**A4**: You are correct, the simple Delaunay triangulation that we use does not consider boundaries and could, for example, connect opposite edges of the airfoil. However, any mesh generator could be used to create a mesh from observation points, so there will not be any problems when switching to one that is capable of handling boundaries.\\n\\n**Q5**: What integrator is used for the experiments? \\n**A5**: We used a variable step size Adams method, which is described in \\\"Solving Ordinary Differential Equations I - Nonstiff Problems (E. Hairer)\\\" Chapter III Section 5 and implemented here: https://github.com/rtqichen/torchdiffeq/blob/5dbaf8585ff9e601889811b7a5859ccf87dc576a/torchdiffeq/_impl/adams.py.\\n\\n**References**: \\n[1] Long, Zichao, et al. 
\\\"Pde-net: Learning pdes from data.\\\" International Conference on Machine Learning. 2018. \\n[2] Geneva, Nicholas, and Nicholas Zabaras. \\\"Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks.\\\" Journal of Computational Physics 403 (2020): 109056.\"}",
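For readers unfamiliar with Adams methods: the torchdiffeq solver referenced in A5 is variable-step and variable-order, but the core idea can be shown with a fixed-step two-step Adams–Bashforth sketch on a toy ODE. This is an editorial illustration — the test problem dy/dt = -y and the step size are assumptions, not taken from the paper:

```python
import math

def f(t, y):
    return -y  # toy dynamics dy/dt = -y, exact solution y = exp(-t)

h, t, y = 0.001, 0.0, 1.0
f_prev = f(t, y)
y += h * f_prev  # bootstrap the multistep method with one Euler step
t += h
for _ in range(999):
    # two-step Adams-Bashforth: y_{n+1} = y_n + h * (3*f_n - f_{n-1}) / 2
    f_curr = f(t, y)
    y += h * (1.5 * f_curr - 0.5 * f_prev)
    f_prev = f_curr
    t += h

print(abs(y - math.exp(-1.0)))  # second-order accurate: error well below 1e-5
```

Multistep methods like this reuse previously evaluated derivatives (`f_prev`), which is why they need a bootstrap step; the adaptive variant used in the paper additionally adjusts the step size and order from local error estimates.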
"{\"title\": \"Answer to AnonReviewer2\", \"comment\": \"**Q1**: For small time steps, however, PDE-Net is multiple orders of magnitude better than the present approach, a fact that should have been thoroughly discussed in the text rather than just mentioned, because the discrepancy is so large. \\n**A1**: This is a good point and we agree that a more thorough discussion of the reasons for the observed difference is required. The main reason for this difference is that we used a 1-hop neighborhood for the GNN-based models while a 2-hop neighborhood (5x5 filters) was used for PDE-Net. When trained with 3x3 filters (1-hop neighborhood), the performance of PDE-Net drops significantly, as shown below.\\n\\n| Model | Average mean relative error +/- std |\\n|:---|:---:|\\n| PDE-Net (5x5) | 0.00111 +/- 0.00199 |\\n| PDE-Net (3x3) | 0.00643 +/- 0.00025 |\\n| DPGN | 0.02990 +/- 0.01708 |\\n| Ours | 0.00407 +/- 0.00114 |\\n\\nWe did not use 3x3 filters for PDE-Net in our experiments due to significantly longer training times and the absence of experiments with such filters in the original paper [1].\\n\\n**Q2**: All the experimental results are presented as, essentially, measures of the training error. \\n**A2**: No. Please note that all reported errors are test errors, which also holds for all compared methods.\\n\\n**Q3**: My sense is that the point of this methodology would be to meaningfully extend the time horizon over which a given PDE could be integrated, so I would have liked to see how the integrator performed outside the test set. \\n**A3**: Correct, and that is essentially what we do. The time horizon can be extended as far as the model remains sufficiently accurate for the new dynamics regimes and the errors do not compound too much. For testing, we selected the time horizon to be 3 times larger than for training as a reasonable extension.\\n\\nFor instance, in the convection-diffusion system we train with observations until 0.2 seconds, while we forecast until 0.6 seconds into the future. 
Sections 3.1 and 3.2 show that we can accurately predict over this 3x horizon. \\n\\n**Q4**: Sparsity is mentioned in the title and the \\\"contributions\\\" but appears basically nowhere in the main text. How does PDE-Net perform in the 2 and 4 time point cases? \\n**A4**: By sparse data we mean that information from some node or time point is missing completely, so we can exclude it, which gives arbitrary spatial and temporal grids. That is, our model does not assume observations at regular spatial grids or at regular time points, but accepts both arbitrary spatial observation points and arbitrarily spaced (over time) measurements. Our experiments demonstrate the applicability of the model in such cases. Yet, sparsity could also arise from missing information at a particular node at a particular time, which would correspond to partially observed state graphs. Extending the model to handle such cases is an interesting direction, but showing that the model can work in this setting was not the goal of our work.\\n\\nThe performance of PDE-Net on 2 and 4 time points can be judged from Figure 7, where it was trained on 21, 11, and 5 time points that correspond to sampling intervals of 0.01, 0.02 and 0.04 sec, respectively. At 11 time points (0.02 sec) PDE-Net already becomes unstable, and it is considerably worse at 5 time points (0.04 sec). This means that PDE-Net requires relatively dense data to work well, while our model is robust to much sparser measurements.\\n\\n**References**: \\n[1] Long, Zichao, et al. \\\"Pde-net: Learning pdes from data.\\\" International Conference on Machine Learning. 2018.\"}",
"{\"title\": \"Answer to AnonReviewer4\", \"comment\": \"**Q1**: The paper does not describe well the setting in which it is applicable. Partially observed states.\\n\\n**A1**: Thank you for pointing that out. While we say that arbitrary spatial and time points can be used in the first paragraph of Section 2, we agree that all details about the setting in which our method is applicable should be discussed in the same place to aid clarity. Having locally missing information at a single node is out of scope for our work, but this is an interesting open problem to be studied in the future. We note that this kind of missing data could potentially be handled in a straightforward manner by masking the nodes with missing information when calculating the loss.\\nWe will revise our manuscript accordingly.\\n\\n**Q2**: Other approaches and guarantees.\\n\\n**A2**: Indeed, some methods, e.g. [2, 1, 3, 4], come with guarantees, but those are different lines of work concerned with either solving known PDEs or approximating solution maps of unknown PDEs. In contrast, the goal of our work is to learn unknown dynamical systems. Since we assume the system is governed by a PDE, it is reasonable to ask how accurately we can approximate the underlying PDE from observations only. In our setting there are two sources of approximation: discretization of the spatial domain, and approximation of the function $\\\\hat{F}$ that defines the dynamics. The former can be addressed by noting that the approximation of partial derivatives present in PDEs improves as the grid becomes finer. The latter can be addressed using model-specific guarantees on what kind of functions the model (e.g. MLP, CNN or GNN) can learn.\\n\\n**Q3**: Local information and generalization to larger time steps.\\n\\n**A3**: We agree that using information from a larger neighborhood could be beneficial, but please note that the time step is not used as our model's input, and our model does not depend on it. 
Instead, our model learns the (time-invariant) function $\\\\hat{F}$ and uses it to evolve the system's state forward with numerical solvers (e.g. Euler or Runge-Kutta), which approximate the continuous-time dynamics. The time step for fixed-step solvers can be adjusted manually, while the time step for adaptive solvers is selected automatically. This ensures that the obtained solutions are accurate and the time steps that the solver makes are not too large.\\n\\nIf the forward solver uses step sizes that are too large, the system evolution will become unreliable. However, we note that our method is built on the assumption of accurate forward solving. Exploring the resilience of the method to cruder forward solutions (e.g. to save computing resources, or to scale to bigger systems) would be an interesting research topic.\\n\\n**Q4**: Removing positional information and adding noise.\\n\\n**A4**: We agree on both points. It was important to test the model on noisy data since the function $\\\\hat{F}$ is local, so noisy observations could significantly affect its output. Our results show that our method is in fact not sensitive to noise and thus demonstrate its applicability to real applications with noisy data. \\n\\nRemoving spatial information when having PDEs in mind is indeed not reasonable, but it was done as an ablation study and, as was shown, it might not affect some types of PDEs (diffusion) but can have a huge effect for other PDE types (convection). We note that some earlier approaches (e.g. GNODE) do not consider spatial locations, which motivated our ablation study.\\n\\n**Q5**: PDE connections and choice of experiments.\\n\\n**A5**: This is a good point. We want to emphasize that our modelling choices were carefully made to match PDE systems and to produce a principled model for learning PDEs. 
That is, we base our model on a classical PDE solution technique; we utilize spatial information; we rely on accurate numerical solvers; the neural architecture is spatially stationary, but non-linear; among others.\\n\\nNext, since the model is still a neural network (which can be finicky or have unexpected behavior), we performed a series of ablation studies to really explore the model's behavior in various conditions, ranging over different grid sizes, amounts of data, time step irregularities, amounts of noise, etc. We feel that these experiments are important to carve out the limits where our model can be reliably applied to learn realistic PDEs from data, and where it starts to struggle (e.g. with too little data). Our experiments can then be seen as our best attempt to open up the black box.\\n\\nThe principles behind our tests are:\\n\\n**Different grid sizes**: In classical PDE solution techniques the grid size affects how accurate the numerical solution is. Also, it is known that formulas for approximating spatial differentials (e.g. finite differences) become more accurate as the grid gets finer. These suggest that the performance of our model should improve as we refine the grid.\"}",
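The method-of-lines view discussed in this thread — a learned local function $\hat{F}$ per node, evolved forward by a standard ODE solver — can be sketched on a toy graph. Everything below is an illustrative assumption, not the paper's implementation: a 4-node path graph stands in for a Delaunay mesh, hand-written graph diffusion stands in for the learned $\hat{F}$, and forward Euler stands in for the adaptive solver:

```python
# toy 4-node path graph standing in for a Delaunay mesh
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
u = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}  # initial nodal state

def F_hat(state, i):
    # stand-in for the learned local dynamics: graph diffusion over neighbors
    # (the real model is a message-passing net that also sees relative positions)
    return sum(state[j] - state[i] for j in neighbors[i])

dt = 0.01  # forward Euler steps; the paper relies on adaptive solvers instead
for _ in range(1000):
    du = {i: F_hat(u, i) for i in u}
    u = {i: u[i] + dt * du[i] for i in u}

# diffusion conserves the total "mass" and smooths the field toward its mean
print(sum(u.values()))  # -> approximately 1.0
```

Because this $\hat{F}$ only sums differences to neighbors, the total state is conserved and every node relaxes toward the mean (0.25 here), mirroring pure diffusion; in the paper, the hand-written local rule is replaced by a trainable network with the same locality structure.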
"{\"title\": \"Interesting method, disappointing description and discussion\", \"review\": \"Review:\\n\\nThis paper proposes an algorithm to learn a model for spatio-temporal data assumed to be described by a stationary spatio-temporal PDE.\\n\\nData considered in the paper consists of vectors y(t) of observations at time t and a fixed set of spatial indices x.\\nA model for the discrete vector y(t) is proposed in the form of coupled ODEs (one for each x_i) with a sparse coupling arising from a neighbouring graph on spatial inputs x, and sharing the same transition function. \\nThis transition acts on relative local spatial information and on the absolute function values in a neighbourhood.\\n\\nThe continuous-time ODE can be solved using classic ODE solvers. The model is fit to data using the adjoint method to backprop through the solution given a squared loss.\\nThe ability of the model to learn is evaluated on data generated from PDEs.\\n\\n+ves: \\n\\n+ The proposed model is sensible and the paper motivates and describes the method well.\\nThe obtained sparsity of the ODE coupling makes the method scalable. Preserving a continuous time dimension is useful \\nif data is irregularly sampled\\n\\n+ Once learned, the model is quite flexible -> adapts to new grids, adapts to various time intervals\\n\\nConcerns:\\n\\n- The paper does not describe well the setting in which it is applicable. Spatio-temporal data could be observations at random space and time locations. One has to read all the details to understand this, while it should be highlighted.\\nFor example, the data arrives in vectors of observations at input (x, t) for different t.\\nThe method would not be able to deal with missing data (a partial vector y(t)) etc.; also, the method assumes stationarity.\\n\\n- The paper cites other approaches that take a different route to the problem (
for example, learning the parameters of the PDE directly). Such approaches have their scaling issues but inherit from half a century of research on approximate solutions to PDEs, and come with guarantees. Does this method come with any guarantees? A discussion on this aspect would be useful.\\n\\n- For infinitesimal time steps, it makes sense to use only local information to build the differential F.\\nBut for longer time steps (think diffusion) you would need information that spans further away to get accurate results.\\nAlong these lines, how would a model trained on a fine time grid deteriorate as you test it on data with bigger time steps?\\n\\n- I'm surprised by the experiment about removing the positional information. Removing it makes no sense when one has stationary PDEs in mind. Same with the noise: if you add noise, performance decreases.\\n\\n\\n\\nOverall, I like the method and think it can be very useful to members of the community, but I find the paper lacks a broader perspective when describing it.\\nThe PDE connection could be used much more to discuss intuition on where it works or fails and to guide\\nthe model experiments and validation.\\nInstead, the current writing makes it sound as if a black-box engineering solution has been proposed and\\ntested with no real guiding principle.\\n\\nFor this reason, I am not a big proponent of the paper but do not oppose acceptance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"PDE learning with few constraints\", \"review\": \"This submission proposes extensions of PDE-Net that relax some constraints, which could help extend the range of applications of this approach. First, rather than fixing a spatial discretization in the form of a grid, the authors use a Delaunay triangulation to represent the domain. The updates to the nodes of this triangulation are performed using a message-passing GNN framework which couples neighboring nodes. Secondly, the authors use a classical adjoint method to allow for arbitrary time discretizations (though this may be much more expensive in practice).\\n\\nI am not an expert on the experimental side of this area, but my sense was that the performance of the approach is relatively good, especially compared to other methods when the time step is large. For small time steps, however, PDE-Net is multiple orders of magnitude better than the present approach, a fact that should have been thoroughly discussed in the text rather than just mentioned, because the discrepancy is so large. \\n\\nAll the experimental results are presented as, essentially, measures of the training error. My sense is that the point of this methodology would be to meaningfully extend the time horizon over which a given PDE could be integrated, so I would have liked to see how the integrator performed outside the test set. \\n\\nSparsity is mentioned in the title and the \\\"contributions\\\" but appears basically nowhere in the main text. I did not actually see in what regard the data was sparse or see a clear theoretical justification as to why the MPNN approach would be superior on sparse data. I suppose the argument is that with arbitrary spatial and time discretization, the method is still able to be formulated, whereas PDE-Net requires dense spatial and time discretizations to train. 
However, the fact that the method could in principle be used on sparse data is not a demonstration that it works on sparse data. How does PDE-Net perform in the 2 and 4 time point cases?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"- Summary\\n\\nThis paper presents a graph neural network model for learning to model PDE dynamical systems. Given its graph structure and its continuous-time formulation (employing the adjoint method for differentiation/backpropagation), this method allows the usage of samples placed arbitrarily in space and time.\\n \\n\\n- Pros\\n\\nThe continuous-time nature of the model allows for the usage of irregular time sample points.\\n\\nPreviously proposed methods either would not work in continuous time, or on unstructured grids, or would not be applicable to settings with unknown governing PDEs. This work combines all these features. \\n\\nThe graph-based representation used makes the proposed method invariant to translations and rotations in the spatial domain.\\n\\n\\n\\n- Cons\\n\\nSimilar graph-based methods that used continuous time to model differential equation dynamics had been previously presented, e.g. GNODE. The novelty of the proposed method might be limited.\\n\\nThe test cases are simple and the experimental details are somewhat lacking for a full evaluation of the results (more details below in the additional comments). \\n\\n\\n\\n- Reasons for score\\n\\n[Edit: Score updated, see discussion below]\\n\\nOverall, given the \\\"cons\\\" described above, notably the potential lack of strong novelty in the proposed method and the lacking experimental description and results, I am for now classifying this paper as marginally below the acceptance threshold. \\nOn the positive side, the method seems to perform favorably when compared to other baselines, in comparisons that are actually favorable to the other methods (e.g., using regular grids). Moreover, the method performs well on the tasks it is tested on.\\nHowever, I'm concerned with some uncertainties I have regarding the experimental section and the presented results. 
These are discussed below in the comments.\\nMoreover, the proposed method centers around using message-passing neural networks to model the differential equation dynamics. As mentioned above, previous methods had already proposed the usage of graph neural networks with continuous time for the learning of differential equations, and I am not sure that the addition of spatial mesh information to such a graph neural network constitutes a significant enough modification at this point. \\nDespite the concerns above, I am open to reading the authors' responses and the reviews/comments and changing my opinion depending on how those affect my current uncertainty.\\n\\n \\n\\n- Additional comments\\n\\nI believe an important element that is missing from the description of the experiments is a clearer account of how much the train and test sets actually differ. This would be important for understanding how hard the tasks being performed are. Clearly, if the training and test sets are too similar, the results lose a lot of power.\\nMoreover, since we also don't see any training vs. test plots, it is also hard to see how different the performance between these two is. (I am not claiming such a plot would be necessary, but merely that, given the otherwise lacking information in this direction, it would be helpful.) I am aware that the appendix includes a description of how the initial conditions for the data are generated, but lacking more information these are hard to grasp intuitively well enough to judge the tasks.\\n\\nMoreover, the error in model rollouts over time seems to spike at the beginning and then quickly flatten out. It seems strange that errors would spike up initially and then not compound significantly over time. Do the authors have any intuition as to why this is the case? Is it maybe a consequence of the data samples reaching a sort of steady state after some time? 
If so, wouldn't this weaken the case being made for a continuous-time model?\\n\\n\\nHow would the Delaunay triangulation being employed deal with possible obstacles present in the spatial domain? For example, an airfoil might have its opposing boundaries connected by edges (since they are close in space), even though that would supposedly be a solid. Would these types of solids have to be manually specified when extending this method to such scenarios? (This is not a \\\"drawback\\\", of course; it would be expected of most methods that such object boundaries would have to be defined.)\\n\\n\\nWhat integrator is used for the experiments?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice evaluation of graph-based networks for PDEs with open questions on the side of continuous time\", \"review\": \"The paper proposes to use graph-based networks for evaluations of PDEs with continuous time formulations. In contrast to existing works on continuous time ODE formulations with graph structures, the proposed networks incorporate relative spatial information in order for the network to evaluate spatial derivatives in addition to the temporal dynamics. A key argument for this setup is the flexibility in space (via the graph nets) in addition to a variable time step.\\n\\nThe proposed setup is evaluated for the relatively simple PDE cases, a convection diffusion case, a pure diffusion case, and another advection diffusion case for transport. Despite the simplicity, the authors make an effort to illustrate the behavior of their method with an ablation study, and to compare to previous work. Here, they compare to PDE-net, which was originally proposed for system identification, and is used for predictions over time instead here. In addition, they compare to the GNODE approach by Poli et al., which omits the spatial information, but already incorporates ODEs into graph nets. For the latter, the authors demonstrate that for simple cases (pure diffusion) the GNODE approach does a good job of identifying dynamics purely over time, while including advection terms significantly increases the error without spatial information. This is good to see and makes sense.\\n\\nFor the PDE-net comparison, I was wondering how the 3 time step sizes were incorporated into the PDE-net. Isn't this \\\"by-default\\\" a fixed time step architecture? Was it changed to receive the time step as an input, or does Figure 7 show 3 networks, i.e. one per timestep? The appendix unfortunately does not provide any additional details on how the comparison was executed. 
\\n\\nI also have to admit I couldn't follow the argumentation why graph-nets, e.g., from Sanchez-Gonzalez et al. '18 aren't suitable in the context of the paper. Sure, they are evaluated on moving positions, but isn't that even more difficult compared to the static locations used here? So when keeping these positions fixed, wouldn't the networks potentially do an even better job than for the moving locations? As a plus, they wouldn't require a Delaunay triangulation or similar meshing step.\\n\\nWhile the ablation study is generally nice, I was missing one central component here: as the paper targets continuous time (it's highlighted in the title), I expected an evaluation regarding how much is gained from the introduction of continuous time. The larger time steps of Fig. 3 seem quite trivial, but what I instead hoped to see here was a comparison to a model that simply receives the chosen timestep dt as an additional input, and is trained in a supervised fashion with data from a few different time step sizes (i.e. non-continuous, but varying dt). This is maybe what was done for the PDE-net, but the paper is not clear here. I think it would be important to demonstrate the advantages of a continuous formulation, which introduces a significant amount of complexity, over a much simpler training with discrete but varying time steps. \\n\\nI hope the authors can shed light on this aspect during the rebuttal, as apart from this relatively central open question I like the paper. Thus, assuming that the authors can clarify this part and show that the proposed method yields benefits, I think this could be an interesting paper for ICLR. It presents an interesting evaluation of PDE-learning with graph-nets, which I would consider to be interesting for many ICLR attendees.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
CU0APx9LMaL | NAS-Bench-ASR: Reproducible Neural Architecture Search for Speech Recognition | [
"Abhinav Mehrotra",
"Alberto Gil C. P. Ramos",
"Sourav Bhattacharya",
"Łukasz Dudziak",
"Ravichander Vipperla",
"Thomas Chau",
"Mohamed S Abdelfattah",
"Samin Ishtiaq",
"Nicholas Donald Lane"
] | Powered by innovations in novel architecture design, noise tolerance techniques and increasing model capacity, Automatic Speech Recognition (ASR) has made giant strides in reducing word-error-rate over the past decade. ASR models are often trained with tens of thousands of hours of high quality speech data to produce state-of-the-art (SOTA) results. Industry-scale ASR model training thus remains computationally heavy and time-consuming, and consequently has attracted little attention in adopting automatic techniques. On the other hand, Neural Architecture Search (NAS) has gained a lot of interest in recent years thanks to its successes in discovering efficient architectures, often outperforming handcrafted alternatives. However, by changing the standard training process into a bi-level optimisation problem, NAS approaches often require significantly more time and computational power compared to single-model training, and at the same time increase the complexity of the overall process. As a result, NAS has been predominantly applied to problems which do not require as extensive training as ASR, and even then reproducibility of NAS algorithms is often problematic. Lately, a number of benchmark datasets have been introduced to address reproducibility issues by providing NAS researchers with information about performance of different models obtained through exhaustive evaluation. However, these datasets focus mainly on computer vision and NLP tasks and thus suffer from limited coverage of application domains. In order to increase diversity in the existing NAS benchmarks, and at the same time provide a systematic study of the effects of architectural choices for ASR, we release NAS-Bench-ASR – the first NAS benchmark for ASR models. The dataset consists of 8,242 unique models trained on the TIMIT audio dataset for three different target epochs, and each starting from three different initializations. 
The dataset also includes runtime measurements of all the models on a diverse set of hardware platforms. Lastly, we show that identified good cell structures in our search space for TIMIT transfer well to a much larger LibriSpeech dataset. | [
"NAS",
"ASR",
"Benchmark"
] | Accept (Poster) | https://openreview.net/pdf?id=CU0APx9LMaL | https://openreview.net/forum?id=CU0APx9LMaL | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"1K1WgyRZMpI",
"Zc-Tvz7XGu",
"7Pyx440f_to",
"Vr2wLT0RpQu",
"5jG1OrVybV8",
"x6TNQcbgS0a",
"gFlzBT57rT6",
"4eEK9cl_O3m",
"s5PRproJfZn",
"ggqKgdRtRuU",
"994e4PAoyGE",
"pSAjgBCpuQ",
"4HcPPbtXgnp",
"aTrYxVkMOgo"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040429880,
1606158143291,
1606117507760,
1606109258432,
1606076057954,
1606070438939,
1606061111866,
1606059916295,
1606059513548,
1604605777102,
1604341774256,
1604068632770,
1603849456470,
1603478311388
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3528/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3528/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3528/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3528/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3528/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3528/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3528/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3528/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3528/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3528/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3528/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3528/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3528/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"Good clarity: a NAS benchmark for ASR and results transferable across datasets. Although this is more specific to the speech domain, building such a benchmark for speech is important for general NAS research, especially as the paper finds different behaviors compared to image classification benchmarks.\\n\\nThe main factor for the decision is the clarity and importance for NAS in the speech domain.\"}",
"{\"title\": \"Revised manuscript\", \"comment\": [\"Our revised manuscript addresses comments raised by the reviewers as follows:\", \"Extended related work to discuss NAS work in ASR.\", \"Clarification about early exit, use of regularizers/dropouts, the design choice of a unidirectional LSTM, and no use of an LM model.\", \"Comparison of correlations between test and validation accuracies reported in the existing image classification benchmarks (NB1 and NB2) and our NB-ASR.\"]}",
"{\"title\": \"It's not about being close to SOTA (part 1/2)\", \"comment\": [\"We thank the reviewer for his/her valuable comments and present our response below.\", \"Indeed, if evaluated purely in the context of beating/matching SOTA results, the results presented in our work might not look convincing. However, we think that limiting one\\u2019s judgment to this criterion alone is not fair as it misses a lot of benefits our work brings, due to a different context (beating SOTA vs. helping NAS/ASR research). To further explain our point of view, we\\u2019d like to present a list of observations made in the paper which we think are important, together with justification why we think the gap between SOTA and our results does not impact our contributions.\", \"Our models achieve accuracy much closer to the ones presented in the literature for similar models [3] than the numbers presented by the reviewer. Even though we agree that [3] does not represent the current SOTA, we think it is important to mention it because it shows that there is nothing fundamentally wrong with the way we train our models and our results are still representative for the chosen setting. To further tackle the problem of accuracy:\", \"Pre-training the models (as in [1]) is both infeasible, if done for all models, and incorrect as it would likely invalidate our transferability experiments. Answering the question of whether training on TIMIT can be successfully used as a proxy for much larger LibriSpeech (~200x larger) tackles one of the core problems in NAS concerning finding cheaper ways of evaluating models - in that context, correlation of 0.87 is high enough in order to consider a methodology successful. For example, one of the SOTA NAS methods achieves correlation of 0.85 when learning the ordering of models in the NB2 search space [4]. 
Although the same paper also shows that higher correlation does not have to imply better NAS performance, in order for us to make any more specific comments we\\u2019d need to dive deep into technicalities of specific searching algorithms (as the degree to which different algorithms are sensitive to correlation can vary) which is way outside the scope of our work. However, we think that this alone serves as a good counter-argument to the claim that correlation of 0.87 is not very high. To provide further details about correlations of proxy tasks and their impact on NAS, we invite the reviewer to take a look at one of the concurrent submissions which happens to talk about this specific problem extensively, thus providing a good \\u201csurvey\\u201d [5].\", \"Compared to [2], our models don\\u2019t use attention-based CTC. We admit that including attention-based mechanisms in our search space would potentially make our work more appealing but we decided to ignore it in order to keep the search space size limited. Even though we do not consider attention, we'd argue that our findings are still relevant - see further comments for details why.\"]}",
"{\"title\": \"Clarification of missing points in the paper\", \"comment\": \"We would like to thank the reviewer for his/her time and effort in giving us valuable feedback. In the following we present our responses to his/her comments.\\n\\n**Re. 1. The best PER achieved in this paper, 21.1%, is quite high by today's standards. In (Graves et al., 2013), the numbers are around 18%. The best WERs achieved in Figure 7 are high in the teens. In (Hsu et al., 2020), the number for training on the 100 hours of LibriSpeech is around 14%.**\\n\\nPlease see our reply to Reviewer-2.\\n\\n\\n**Re. 2. It is unclear if the paper uses any regularizer at all when training the models.**\\n\\nYes, we used an L2 kernel regularizer with convolution layers and dropouts with dense layers. We will incorporate this information in the revised paper.\\n\\n**Re. 3. The discrepancy between the numbers in the paper and others makes me wonder whether the search over the cells is the wrong direction to begin with. Maybe it is the things held fixed that play the role of achieving the best numbers. For example, the macro architecture is held fixed, the optimizer is held fixed, the learning rate schedules are more or less fixed. The macro architecture might play a critical role here. Typically, the competitive architectures require many layers of LSTMs instead of the one used in the paper. It is quite discouraging that the models, discovered by NAS after spending so much compute, are not competitive with baseline models reported in other papers.**\\n\\nBefore performing a search on micro-cells, we did a search for a good macro architecture, learning rate, and learning rate decay factor, conducting a range of training experiments to select good values for these macro-structure and optimizer settings. Please see Section 3.3 for more details.\"}",
"{\"title\": \"It's not about being close to SOTA (part 2/2)\", \"comment\": \"(continued from the previous comment)\\n\\n - The reviewer claims that it is hard to evaluate our results but we think that\\u2019s not true. Of course, were our results close to SOTA, it would be much easier to judge them, but even in their current shape we think there are plenty of arguments supporting our work:\\n - Firstly, we would like to emphasize that NAS benchmarks are all behind relevant SOTA levels (see the table below) - this is caused by the fact that their goal (just like ours) is not related to beating SOTA but rather to provide insights into the effects of different architectural choices on the final performance, and providing a robust way of assessing searching algorithms easily.\\n - In the context of the above, a NAS benchmark is valuable as long as the insights it provides are relevant for a particular domain(s). In that regard we\\u2019d like to point out that:\\n - the best architectures found in our search space match surprisingly well the ones found in manually designed SOTA models - that suggests that the results are meaningful as they do not obviously contradict what is currently known. \\n\\n - at the same time the results show that more efficient variations can be found, which is also expected considering the success of NAS in other domains. 
It is important to clarify that we do not claim that the best model in our search space is ultimately the best to use in any SOTA-oriented research but rather we\\u2019d argue that a NAS algorithm which is able to identify the best model in our search space quicker is more likely to be successful when used in the SOTA-oriented research - which is the point of a good NAS benchmark.\\n\\n - from a more NAS-oriented point of view, the insights about the effect of the number of skip connections on a model\\u2019s performance are both interesting and potentially relevant for differentiable search which is known to be sensitive to choices regarding skip-connections (addressed for example in [6], for more recent comment on the problem see for example section 3.2 from one of the concurrent submissions [7]). The fact that our observations are somewhat aligned with this relatively distant problem in NAS proves, in our opinion, that our benchmark is potentially useful and thus can be objectively judged as a good contribution regardless of the gap to SOTA.\\n\\nWe hope that the above response shows clearly that the contributions of our work are relevant in the context of both NAS and ASR and remain valid even considering the fact that our accuracy values remain behind the current SOTA. Similar to other NAS benchmarks, we expect our work to fuel future ASR work by making it easier to use NAS and thus, directly, contributing to future gains in the SOTA-oriented research.\\n\\n\\n| | Best Reported Accuracy | Models from literature |\\n| -- | -- | -- |\\n| NAS-Bench-101 (2019) | 94.32 (CIFAR-10) | (see below) |\\n| NAS-Bench-201 (2020) | 94.37 (CIFAR-10) | 94.6 [8] (2017), 97.92 [9] (2019), 99.3 [10] (2019) |\\n| | 73.51 (CIFAR-100) | 82.82 [11] (2017) |\\n| | 47.31 (ImageNet-16-120) | |\\n| NAS-Bench-NLP (2020) | 78.5 (PTB, word level) | 78.4 [12] (2014), 31.3 [13] (2019) |\\n\\n**References:**\\n\\n[3] Zhang et al. 
Towards End-to-End Speech Recognition with Deep Convolutional Neural Networks. 2017. \\n\\n[4] L. Dudziak et al. \\u201cBRP-NAS: Prediction-based NAS using GCNs.\\u201d NeurIPS (2020)\\n\\n[5] https://openreview.net/forum?id=0cmMMy8J5q\\n\\n[6] X. Chen et al. \\u201cProgressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation.\\u201d ICCV (2019)\\n\\n[7] https://openreview.net/forum?id=PKubaeJkw3\\n\\n[8] E. Real et al. \\u201cLarge-Scale Evolution of Image Classifiers.\\u201d ICML (2017)\\n\\n[9] H. Cai et al. \\u201cProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware.\\u201d ICLR (2019)\\n\\n[10] L. Wang et al. \\u201cSample-Efficient Neural Architecture Search by Learning Action Space.\\u201d arXiv preprint arXiv:1906.06832 (2019)\\n\\n[11] G. Huang et al. \\u201cDensely Connected Convolutional Networks.\\u201d CVPR (2017)\\n\\n[12] W. Zaremba et al. \\u201cRecurrent Neural Network Regularization.\\u201d arXiv preprint arXiv:1409.2329 (2014)\\n\\n[13] C. Wang et al. \\u201cLanguage Models with Transformers.\\u201d arXiv preprint arXiv:1904.09408 (2019)\"}",
"{\"title\": \"Transferability to other speech domain\", \"comment\": \"We would like to thank the reviewer for his/her time and effort in giving us valuable feedback.\\n\\nWe do agree that scepticism about transferability is justified but, as hinted by the reviewer, we also believe that it does not undermine our current results. Even if limited by the scope of our work, we think our contributions are significant in the context of NAS for ASR.\"}",
"{\"title\": \"Addressing the questions\", \"comment\": \"We would like to thank the reviewer for his/her time and effort in giving us valuable feedback. In the following we present our responses to his/her comments.\\n\\n\\n**Re. 1. Why the design only uses unidirectional, rather than bidirectional LSTM? Is the paper focusing on on-device deployment?**\\n\\nYes, it is. We will make it clear in the revised paper.\\n\\n\\n**Re. 2. Regarding the performance on TIMIT.**\\n\\nPlease see our reply to Reviewer-2.\\n\\n\\n**Re. 3. According to the last paragraph in page 4, it seems that the authors logged the validation loss/PER for each epoch, but only logged the test metrics at the end of the training. So, the test performance is not coming from the best epoch, this might have some effect to the validation/test performance correlation study.**\\n\\n\\u201cfinal test PER\\u201d refers to the test PER of the best model according to its validation accuracy after 40 epochs of training - this is already mentioned in Section 4, third paragraph, second sentence. However, we understand that it\\u2019s easy to miss and we use the term even before it\\u2019s described in detail so we will make it clearer in the revised paper.\\n\\n**Re. 4. The authors claimed the transferability of the identified cell structure. The evidence is the correlation between Librispeech WER and TIMIT PER. To better support this claim, the authors may consider conduct study on some other corpus like SWITCHBOARD.**\\n\\nWe agree that more use cases would naturally strengthen our claims but we\\u2019d argue that our current scope is already sufficient to make a case for NAS in the ASR domain (please see our reply to Reviewer 5\\u2019s comment 4 for more details). 
In summary: the results are self-contained and were obtained in a rigorous manner, they answer important questions regarding usage of NAS for ASR, and they required an already significant amount of engineering work and compute resources - similar to related work published in top conferences.\"}",
"{\"title\": \"Other comments\", \"comment\": \"**Re. 1: Does the TIMIT experiment include a phone bi-gram model?**\\n\\nThe results presented in the paper are without a phone bi-gram language model (LM). \\nWe didn't use it in our experiments in order to avoid the confounding impact of the LM and HPO for the LM, e.g., weighting factors. This helps to keep the architecture search for the Acoustic Model (AM) tractable. Moreover, we did try including an LM for TIMIT in our initial experiments, but we observed that the use of the bi-gram model trained with the TIMIT train set didn\\u2019t give a significant improvement in PER (i.e., only 0.2% absolute gain). We will add the details in the revised version of the paper.\\n\\n\\n**Re. 2: Figure 2 is an interesting finding. The paper says that it is different behavior compared with image classification benchmarks. Please discuss it by referring to the report about the image classification benchmarks.**\\n\\nWill do in the revised version. Please see the table below which highlights the differences in terms of Spearman-r correlation:\\n\\n\\n\\n| | NB1, CIFAR-10 | NB2, CIFAR-10 | NB2, CIFAR-100 | NB2, ImageNet-16 | NB-ASR, TIMIT (ours) |\\n|---|---|---|---|---|---|\\n| **Top 1000** | 0.356* | 0.855 | 0.613 | 0.827 | 0.210 |\\n| **Overall** | 0.989 | 0.993 | 0.987 | 0.997 | 0.852 |\\n\\n\\\\* note that NB1 contains many more models, so the top 1000 models represent a much lower percentage. From our observations this makes the top 1000 correlation lower (i.e., if we took fewer models in the other cases, to match percentages, correlation would be lower for them as well) and is reflected by the much higher discrepancy between Top 1000 and overall correlations compared to other cases\"}",
"{\"title\": \"Addressing weakness\", \"comment\": \"We would like to thank the reviewer for his/her time and effort in giving us valuable feedback. In the following we present our responses to his/her comments.\\n\\n\\n**Re. 1: The problem is too specific to ASR.**\\n\\nAlthough our work focuses mainly on ASR, we believe that our efforts in incorporating NAS in the ASR domain bridge an important gap in the literature and thus contribute to both communities; we believe our work will be of interest to many researchers. To further support our claim, we would like to kindly point out that all of the existing NAS benchmark works are also limited to only one task (e.g., image classification) and they have been successful in attracting the attention of many, e.g., NAS-Bench-201 was published as a spotlight at last year's ICLR.\\n\\n\\n**Re. 3: TIMIT is not a public/downloadable corpus, and its access is limited.**\\n\\nWe chose TIMIT mainly due to its popularity, small size, and high quality of phoneme-level transcriptions. In general, we agree that it\\u2019s a better practice to stick to publicly available datasets; however, in this particular case there are a number of good reasons to use TIMIT:\\n- Compared to CMU, TIMIT is more popular and therefore results contained in the dataset are easier to relate to the existing literature. \\n- Compared to \\u201cmini\\u201d Librispeech, TIMIT is a better choice in the context of our transferability experiments since it\\u2019s completely disjoint from the full Librispeech dataset.\\n- It also gives us some insights into transferability of architecture search findings from a phoneme-based system to a character/sub-word based system.\\n- Its transcriptions are hand verified, and balanced for phonetic and dialectal coverage. \\n- Even though the TIMIT dataset is not open sourced, it is available to be used for non-commercial and academic purposes freely.\\n\\n\\n**Re. 
4: The analysis is mainly based on one corpus (TIMIT).**\\n\\nSimilarly to our answer to the first issue raised by the reviewer, we\\u2019d like to point out that our approach follows existing NAS benchmarks and the choice to focus on a single corpus is in line with the existing literature. Even though we understand the reviewer's scepticism about transferability to other datasets, in the paper we only mention that - in the context of our search space, training methodology, etc. - correlation between TIMIT and Librispeech performance is high, which is an important finding potentially opening doors for more NAS research in the ASR domain. In general, we agree that \\u201cmore is better\\u201d in ML research, but we\\u2019d argue that our choices regarding the scope of work are all well-justified considering the state of NAS and ASR research, and the presented findings are sufficient to be of importance to our chosen domain.\\n\\n**Re. 5: No survey about the architecture search efforts in ASR. There is a lot of literature about NAS, evolutionary algorithms, and black-box search of ASR architectures.**\\n\\nWe admit that there is some NAS-related work for ASR which is currently missing - however, most of it seems to have been published very recently - after or around the time our manuscript was first submitted. We will update the revised paper and point to the relevant prior work (listed below).\\n\\nJ. Kim et al. \\\"Evolved Speech-Transformer: Applying Neural Architecture Search to End-to-End Automatic Speech Recognition.\\\" INTERSPEECH (2020).\\n\\nY. Chen et al. \\\"DARTS-ASR: Differentiable Architecture Search for Multilingual Speech Recognition and Adaptation.\\\" INTERSPEECH (2020).\\n\\nT. Mo et al. \\u201cNeural Architecture Search For Keyword Spotting.\\u201d INTERSPEECH (2020).\\n\\nL. He et al. 
\\u201cLearned Transferable Architectures Can Surpass Hand-Designed Architectures for Large Scale Speech Recognition.\\u201d arXiv preprint arXiv:2008.11589 (2020).\\n\\nA. Baruwa et al. \\u201cLeveraging end-to-end speech recognition with neural architecture search.\\u201d arXiv preprint arXiv:1912.05946 (2019).\\n\\nT. Veniat et al. \\u201cStochastic adaptive neural architecture search for keyword spotting.\\u201d ICASSP (2019). \\n\\nH. Mazzawi et al. \\u201cImproving keyword spotting and language identification via neural architecture search at scale.\\u201d INTERSPEECH (2019).\\n\\n\\n**Re. 6: The optimization hyper-parameters and input feature configurations should be considered as one of the search configurations.**\\n\\nInclusion of additional hyper-parameters and input configurations in the search space would result in an exponential increase in configurations and would thus be computationally infeasible. To somewhat mitigate this, we decided to instead include results using two different sets of hyper-parameters. Please note that our decision to keep the search space limited to architecture only is aligned with the existing NAS benchmarks (as the problem of feasibility is common), and the decision to include results for two different sets of hyper-parameters is taking a step further to acknowledge the problem (since other benchmarks did not do that).\"}",
"{\"title\": \"NAS benchmark for ASR\", \"review\": \"This paper proposes a new experimental benchmark for ASR based on neural architecture search (NAS). NAS has become one of the important machine learning/deep learning areas and has been widely studied in image classification and NLP tasks. This paper follows this trend and provides a NAS benchmark for ASR by using TIMIT. The paper also presents a number of experiments varying the neural network architectures and shows a lot of interesting findings. The paper is well written overall.\", \"strengths\": \"1) Providing a NAS platform for ASR\\n2) A lot of analysis in terms of the various architectural configurations and NAS algorithms\\n3) A discussion of applying such methods to the other database (Librispeech).\", \"weaknesses\": \"1) The problem is too specific to ASR. It may not gain much attention from general machine learning researchers at ICLR \\n2) Not so much technical or algorithmic novelty, although I appreciate the authors' efforts for this new benchmark.\\n3) TIMIT is not a public/downloadable corpus, and its access is limited. I recommend that the authors try other easily accessible corpora (e.g., CMU an4 or \\\"mini\\\" Librispeech). \\n4) The analysis is mainly based on one corpus (TIMIT), and I'm not very sure the findings and discussions are applicable to other databases.\\n5) No survey about the architecture search efforts in ASR. There is a lot of literature about NAS, evolutionary algorithms, and black-box search of ASR architectures.\\n6) The optimization hyper-parameters and input feature configurations should be considered as one of the search configurations. The architectures, input feature configurations, and optimization hyper-parameters are highly correlated.\\n\\nOther comments\\n- Does the TIMIT experiment include a phone bi-gram model? This is a standard experimental setup.\\n- Figure 2 is an interesting finding. 
The paper says that it is different behavior compared with image classification benchmarks. Please discuss it by referring to the report about the image classification benchmarks.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"The paper presents a study on neural architecture search for speech recognition. A new approach for NAS using convolutional models on speech is presented. Using the TIMIT dataset, ~8k different architectures are trained and evaluated. A study shows that the findings on TIMIT transfer well to a large-scale dataset, Librispeech.\", \"pros\": [\"The study is clearly novel; as far as I can tell this is the first NAS paper on speech.\", \"The finding on the transferability between TIMIT and Librispeech is significant.\", \"The paper is well motivated and well situated in the literature.\"], \"cons\": [\"The databases selected make the findings and the models limited to one domain, hence limiting the significance of the paper.\"], \"detailed_comments\": \"- My main concern with the paper is the choice of corpora: TIMIT and Librispeech are commonly used in ASR studies, but they are both composed of only clean, read speech in English. There is no way to know if the TIMIT results also transfer to another type of speech (conversational, acted, etc.), to other recording conditions or to another language. Hence, the presented findings and models are only useful to ASR research focusing on this particular domain. I would encourage the authors to plan more studies with varied data as future work.\\n\\nOverall, I think the paper is a very good first step towards more NAS-benchmark studies for speech, hence I vote for acceptance despite its limitation in terms of domain.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting research on NAS in ASR domain\", \"review\": \"Motivation and summary:\\nA lot of research has been done on NAS benchmarks in the domain of computer vision (and also NLP). This paper introduces a NAS-benchmark dataset for ASR. The authors release a NAS-Bench dataset which would benefit both ASR model architecture search and reproducible NAS research.\\nThis is an interesting and pioneering work in the ASR domain. The NAS-Bench-ASR is built upon TIMIT, a relatively small corpus for speech phonetic recognition. Besides the careful analysis on the designed NAS dataset, the authors also evaluated a few NAS algorithms, and showed that good cell structures identified on the TIMIT dataset aligned with some existing convolutional ASR models in the literature, and can be transferred to Librispeech. I have the questions/concerns below:\", \"regarding_the_design_of_the_macro_architecture\": \"Why does the design only use unidirectional, rather than bidirectional LSTM? Is the paper focusing on on-device deployment?\", \"regarding_the_performance_on_timit\": \"The best performance on TIMIT reported in the paper (PER 18.91 on validation set and 21.05 on test set) has a clear gap compared to numbers reported in the literature after 2013. Basically, simple stacked bi-directional LSTM CTC recognizers should be able to achieve clearly lower validation/test set PER (on 39 Phones) than numbers reported in the paper.\\nI understand that achieving low PER on TIMIT (and low WER on LibriSpeech) is not the goal of this paper. However, the goal of NAS is to search for competitive (or even state-of-the-art) architectures for use in future research. 
While the NAS-BENCH-ASR dataset is built upon TIMIT, it would be more convincing if stronger models with better validation/test PER were identified.\", \"regarding_early_stopping\": \"According to the last paragraph on page 4, it seems that the authors logged the validation loss/PER for each epoch, but only logged the test metrics at the end of the training. So, the test performance does not come from the best epoch; this might have some effect on the validation/test performance correlation study.\", \"regarding_transferability\": \"The authors claimed the transferability of the identified cell structure. The evidence is the correlation between Librispeech WER and TIMIT PER. To better support this claim, the authors may consider conducting a study on some other corpus like SWITCHBOARD.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A sound paper with somewhat discouraging results\", \"review\": \"This paper studies neural architecture search for automatic speech recognition. The approach is to first search over small, reusable networks, called cells, and then applies the cells to a template network. The cells are learned with phonetic recognition on TIMIT and validated on letter recognition on LibriSpeech.\\n\\nThe approach strikes a good balance between having a large search space and the computation cost of the search. I will discuss a few weaknesses in detail, but these weaknesses won't be known prior to performing the experiments in the paper.\\n\\nThe presentation of the paper is also done well. I have no trouble following the paper from start to finish.\\n\\nThe weakness of the paper is the absolute PERs on TIMIT and WERs LibriSpeech. The best PER achieved in this paper, 21.1% is quite high in today's standard. In (Graves et al., 2013), the numbers are around 18%. The best WERs achieved in Figure 7 are high in the teens. In (Hsu et al., 2020), the number for training on the 100 hours of LibriSpeech is around 14%.\\n\\nIt is unclear if the paper uses any regularizer at all when training the models. Even adding some amount of dropout would help the final numbers.\\n\\nThe discrepancy between the numbers in the paper and others makes me wonder the search over the cells is a wrong direction to begin with. Maybe it is the things held fixed that play the role of achieving the best numbers. For example, the macro architecture is held fixed, the optimizer is held fixed, the learning rate schedules are more or less fixed. The macro architecture might play a critical role here. Typically, the competitive architectures require many layers of LSTMs instead of one used in the paper. 
It is quite discouraging that the models, discovered by NAS after spending so much compute, are not competitive with baseline models reported in other papers.\\n\\n\\nHybrid Speech Recognition with Deep Bidirectional LSTM\\nAlex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed\\nASRU, 2013\\n\\nSemi-Supervised Speech Recognition via Local Prior Matching\\nWei-Ning Hsu, Ann Lee, Gabriel Synnaeve, and Awni Hannun\", \"arxiv\": \"2002.10336\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"results not close to sota\", \"review\": \"The authors contribute to the NAS literature by presenting a framework that works decently well on small ASR tasks, specifically TIMIT. They make judicious decisions regard the macro and micro cells that are then swept over. They also show that there is some correlation between training for TIMIT and tasks that have more data, such as librispeech. The experiments look to have been done carefully.\\n\\nMy chief issue with the work is how far the results are from sota on any of the tasks. Their NAS search for TIMIT only yields PER of 21.93 on test, 19.55 on val. wav2vec 2.0 [1] gets 8.3 test, 7.4 val. Authors may argue the wav2vec results are pre-trained, and I would argue that the authors should also do that. However, [2] gets 13.8% on timit test with training being from scratch. Similarly, their best librispeech wer is 19, which is _very far_ from sota from the same paper. Even their transfer correlations between TIMIT and LibriSpeech are not very high.\\n\\nIt is challenging to evaluate their results when they are so far from sota. Authors should resubmit their paper with updated results.\\n\\n[1] Baevski, et al. wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations. https://arxiv.org/pdf/2006.11477.pdf\\n\\n[2] Ravanelli, et al. THE PYTORCH-KALDI SPEECH RECOGNITION TOOLKIT. https://arxiv.org/pdf/1811.07453v2.pdf\\n\\n\\n========================================================================\\n\\nI thank the authors for their detailed rebuttal. However, their accuracies are still very below sota. Therefore, I am inclined to stick to my original review and rating.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
98ntbCuqf4i | MQES: Max-Q Entropy Search for Efficient Exploration in Continuous Reinforcement Learning | [
"Jinyi Liu",
"Zhi Wang",
"Jianye HAO",
"YAN ZHENG"
] | The principle of optimism in the face of (aleatoric and epistemic) uncertainty has been utilized to design efficient exploration strategies for Reinforcement Learning (RL). Different from most prior work targeting discrete action spaces, we propose a general information-theoretic exploration principle called Max-Q Entropy Search (MQES) for continuous RL algorithms.
MQES formulates the exploration policy to maximize the information about the globally optimal distribution of the $Q$ function, which could explore optimistically and avoid over-exploration by recognizing the epistemic and aleatoric uncertainty, respectively. To make MQES practically tractable, we first incorporate distributional and ensemble $Q$ function approximations into MQES, which formulate the epistemic and aleatoric uncertainty accordingly. Then, we introduce a constraint to stabilize the training and solve the constrained MQES problem to derive the exploration policy in closed form. Empirical evaluations show that MQES outperforms state-of-the-art algorithms on Mujoco environments. | [
"mqes",
"entropy search",
"epistemic",
"efficient exploration",
"continuous reinforcement",
"exploration policy",
"aleatoric uncertainty",
"principle",
"optimism"
] | Reject | https://openreview.net/pdf?id=98ntbCuqf4i | https://openreview.net/forum?id=98ntbCuqf4i | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"APqpRYxOLP8",
"7O-hHfpeJos",
"7z2e34yFC95",
"TRAckQEI_ct",
"jaVNxsrorAk",
"Xe6HPoLQhCJ",
"cUmkYrEJGQm",
"-lERuumpCP7",
"GpNUv95AZZa",
"2lWYm0iIR7U",
"OVog7hbw0BY",
"6XZfa778gXU",
"umHwKNCpb_1",
"r-FE_4YDv7n"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040450256,
1606035202224,
1606035104019,
1606034995829,
1606034557109,
1606034303243,
1606034165142,
1606033883952,
1606033595653,
1604748374403,
1604015123814,
1603902208435,
1603896865502,
1603791327403
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3526/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3526/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3526/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3526/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3526/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3526/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3526/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3526/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3526/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3526/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3526/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3526/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3526/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper contributes to the community by introducing an approximation to distribution Q functions, based on the epistemic and aleatoric uncertainty. The reviewers believe the ideas make sense. However the presentation and its experiment results make it hard for them to understand some important details. For example, the reviewers are confused about why the empirical results show the proposed methods are better.\\n\\nThe majority of the reviewers are negative about the paper. After rebuttal, the reviewers are not convinced. Based on this, the meta-reviewer recommends rejection. Authors can strengthen paper by improving its presentation and addressing the concerns from the reviewers.\"}",
"{\"title\": \"\\u3010Continued\\u3011Response\", \"comment\": \"\\u3010Q4\\uff1aThe empirical evaluation is done only against rather limited baselines (basically random exploration baselines). I would encourage the authors to compare their method to other forms of exploration. Particularly relevant to this paper is the work by Houthooft et al. 2016 (VIME) which also uses information-theoretic objective for exploration in continuous problems. The authors should at least cite this paper in their related work section.\\u3011\\n\\nActually, VIME is different from our method. In VIME, the environment dynamics is estimated with stochastic parameters, and derive intrinsic reward sequentially. It is more like model-based method. However, in MQES, we model the uncertainty by using distributional and ensemble Q function approximations. So, MQES is a model-free method. \\n\\n\\u3010Q5\\uff1aFollowing the last two points, a great improvement could be if other than just demonstrating performance in terms of reward, the authors would evaluate the exploratory behavior itself of an agent trained with their method, in some simple environment (i.e in terms of novel states visited / distance traveled / etc.)\\u3011\\n\\nDemonstrating the insights of MQES in the simple environments is definitely a good way to improve the quality of our method. But due to the time limit of the rebuttal, we will put it as future work.\\n\\n\\u3010Q6\\uff1aIn the abstract: the use of \\\"optimism in the face of uncertainty\\\" is definitely not a \\\"recently\\\"\\u3011\\n\\nWe have removed \\\"recently\\\" in the revised version.\\n\\n\\u3010Q7\\uff1a\\\"The above methods are not efficient\\\" (3rd paragraph, Introduction): This is not accurate. UCB (and other methods) are provably efficient for several problems/assumptions.\\u3011\\n\\nPlease see the first part of this sentence, i.e., \\\"However, since the aleatoric uncertainty in the RL systems are heteroscedastic, ...\\\". 
We point out that UCB could be inefficient when the aleatoric uncertainty is heteroscedastic.\\n\\n\\u3010Q8\\uff1a Some relevant literature is missing from the related work. Most notable is the VIME paper mentioned earlier (Houthooft et al. NeurIPS 2016) and the Fox et al. 2018 ICLR paper (DORA The explorer) which combines counter-based like exploration with an optimism principle for high-dim MDPs.\\n\\u3011\\n\\nSince VIME and DORA are both intrinsically motivated exploration methods, we refer to them in the first paragraph of Sec. 2.\\n\\n\\u3010Q9\\uff1aSection 3.2: $T^\\\\pi$ is not the \\\"Bellman optimiality operator\\\" but rather the bellman operator for policy $\\\\pi$.\\u3011\\n\\nWe have corrected \\\"Bellman optimality operator\\\" as \\\"Bellman operator\\\" in Section 3.\"}",
"{\"title\": \"Response\", \"comment\": \"\\u3010Q1\\uff1a There is a use of terminology which might not be clear or known for the general RL/exploration audience (\\\"acquisition functions\\\", \\\"heteroscedastic aleatoric uncertainty\\\"). The entire discussion of the two types of uncertainty seems somewhat disconnected from the method itself. A short explanation of why and how the two types of uncertainty are important for the exploration problem would be very helpful.\\u3011\\n\\nThe introduction of \\\"acquisition functions\\\" is not related our main point of paper, hence we remove it in the revised version. Moreover, we describe the \\u201dheteroscedastic aleatoric uncertainty\\\" more detailed in the third paragraph in Sec. 1 \\n\\nWe add a short explanation of the benefits of uncertainty for RL exploration in the second paragraph at Sec. 1. \\n\\n\\u3010Q2\\uff1a There are some notation obscurities or inaccuracies. This is most notable in Section 4.1, which is unfortunate since this is where the key ideas of the approach are discussed; Equation 8, which is central to what follows, is rather confusing. The text mentions that $\\\\pi_E$ selects action $a_t$ that (...)\\\", but the equations then seems to define an entire policy. And the optimization problem (in the same Eq.) is in itself dependent on $a_t$, so it's not even clear one gets a valid policy/distribution from something like $\\\\pi_E(a_t|s_t) = \\\\arg\\\\max_{\\\\pi} {\\\\bf{F}} ^\\\\pi(s_t, a_t)$; Following the previous point, Eq. 9 is also confusing. It's not clear to me what the authors mean by measuring MI between $Z^*$ and the pair $(Z^\\\\pi,\\\\pi)$ ? It's also not clear what is the meaning of the \\\"posterior distribution\\\" denoted by $p$ , and why the mutual information is measured for the $Z$ parameters (return probs) but then re-expressed as the difference in entropies for the policies (action probs).\\u3011\\n\\nWe admit that the theory presented in Sec. 4.1 is vague. 
In the revised version, we rewrite the equations to make them more accurate: \\n\\nFor equation 8, the mutual information is conditioned on the state. Hence, to make it more rigorous, equation 8 is rewritten as: $\\\\pi_E = \\\\arg\\\\max_{\\\\pi\\\\in \\\\Pi} {\\\\bf{F}} ^\\\\pi(s_t).$\\n\\nFor MQES, we consider actions as random variables and policies are the distributions that the actions follow. Hence, given the state, the conditional mutual information is actually between the exploration action random variable $A_E\\\\sim\\\\pi_E$ and the random variable of the globally optimal $Q$ function $Z^*(s,a^*)$ in Eq. 9, which take values $a\\\\in\\\\bf{A}$ and $z^*(s,a^*)$, respectively. And the posterior probability $p(a|z^*(s,a^*),s)$ describes the distribution of the exploration action conditioned on the state and the globally optimal $Q$ function $z^*(s,a^*)$. Hence, Eq. 9 is rewritten as: $ {\\\\bf{F}}^\\\\pi(s_t) = {\\\\bf{MI}}(Z^*(s,a^*),A|s = s_t) ={\\\\bf{H}}\\\\left[ \\\\pi(a_t|s_t)\\\\right]-{\\\\bf{H}}\\\\left[ p(a_t|z^*(s_t,a^*), s_t)\\\\right].$\\n\\n\\u3010Q3\\uff1aSince exploration here is encouraged by choosing informative actions about Q*, it's not clear that this method will be helpful in very sparse-reward settings (which are a central motivation for sophisticated exploration techniques). Put differently, relying on Q* to guide exploration ultimately couples exploration to the external reward, which seems rather undesirable to me. The method might be helpful from other perspectives (optimization, controlling the level of \\\"over-exploration\\\" etc), but it's not clear that it is helpful as an exploration method per se.\\u3011\\n\\nActually, the epistemic uncertainty introduced by the estimation of $Q^*$ could offer extra information for exploration when the external reward is sparse. For states that are seldom visited, the epistemic uncertainty will be relatively large and exploration should be encouraged. 
Hence, it is expected to perform better in sparse environments, and we also conduct experiments in sparse Mujoco environments to show the improvements. \\n\\nHowever, if we only formulate the uncertainty using ensemble critics, the formulated uncertainty is a mixture of the aleatoric and epistemic uncertainty, where the aleatoric uncertainty is caused by the randomness of the environment and cannot be eliminated. Hence, if we do not distinguish these two uncertainties and formulate them separately, we may explore states that are visited frequently but with high randomness, i.e., low epistemic uncertainty and high aleatoric uncertainty, which is undesirable.\"}",
"{\"title\": \"Response\", \"comment\": \"\\u3010Q1\\uff1aI felt that the paper isn't well written and discusses a lot of different concepts in a haphazard manner. There are a lot of equations and symbols in the text without proper explanation and context which make it difficult to gather the main contribution. The language used gets vague in many statements made in the paper\\nFor ex. \\\"Proposition 1. Generally, the posterior probability is as follows\\\".\\u3011\\n\\nWe admit that we bring a few concepts that are not generally known to RL researchers. To make the background easy to follow, we firstly explain the motivation of epistemic and aleatoric uncertainty encouraging exploration in Sec. 1; Then, we remove the introduction of acquisition function, which is not our main point.\\n\\nFor the equations, we rewrite Sec. 4.1 to make the derivation of MQES more rigorous, which is the main theoretical contribution of our paper; Then, we check all the symbols and try to clarify them.\\n\\n\\t\\nFor the language, we have polished the paper.\\n\\n\\u3010Q2\\uff1aA lot of algorithm adaptations are proposed without actually carrying out ablations which make it difficult to discern if the proposed MI maximization is indeed responsible for performance. For ex. \\\"Since the target for critic in the advanced algorithms, like SAC and TD3, is usually estimated pessimistically..\\\". The authors should actually present ablations to support if a pessimistic estimate is indeed required for their adaptation for these methods.\\u3011\\n\\nWe admit that we need more ablation experiments. In the revised version, we added Sec. 5.4 to conduct ablation experiments, regarding to sesitivity to the hyper-parameters and the gain of distinguishing two types of uncertainty. 
\\n\\nHowever, it is worth noting that we can utilize other methods to formulate $Z^{\\\\pi_E}$, like mean estimation, i.e., $\\\\mathbb{E} \\\\left[Z^{\\\\pi_E}\\\\right]=\\\\mu_Z (s, a;\\\\theta)$ and $z_i^{\\\\pi_E}(s,a;\\\\theta) = \\\\mathbb{E}_{k = {1, 2}} \\\\left[z_i(s, a; \\\\theta_k)\\\\right]$. But it only affects the choice of the hyper-parameter $\\\\beta$ and does not affect the final performance. \\n\\n\\u3010Q3\\uff1a Why haven't the authors included OAC as a baseline given that it outperforms SAC in several tasks? Further the results show little difference in performance in comparison with DSAC on the Mujoco tasks, given that only 5 seeds were used in evaluation, it brings the significance of the results under question. The authors should provide appropriate measures like P-values to support the experiments.\\u3011\\n\\nWe compare with OAC in Sec. 5.4.1, whose performance is similar to DSAC's, since it cannot avoid the effects of aleatoric uncertainty. Besides, we add more experiments on sparse Mujoco tasks and the results are discussed in more detail in Sec. 5. In standard Mujoco, MQES does perform slightly better than DSAC in those easy tasks such as Hopper-v2 and Walker2D-v2. However, in those hard and sparse-reward tasks, MQES performs significantly better than DSAC, and MQES\\\\_Q demonstrates an advantage in stability over MQES\\\\_G.\"}",
"{\"title\": \"Response\", \"comment\": \"\\u3010Q1: However, MQES doesn't show significant improvement over DSAC. For example, Except Sparse-HalfCheetah-v2, $MQES_Q$ has almost same performance as DSAC. Except Sparse-HalfCheetah-v2 and Ant-v2, $MQES_G$ has almost the same performance as DSAC. \\u3011\\n\\nAs shown in Figure 1, both MQES\\\\_G and MQES\\\\_Q show better performance than DSAC in the difficult tasks, and in other easy tasks as shown in Appendix F, MQES can also show slightly better than DSAC. It seems that exploration is not the main bottleneck in the easy tasks, i.e., standard Hopper-v2 and Walker2D-v2. However, in those harder or sparse reward tasks, MQES performs significantly better than DSAC, and MQES\\\\_Q demonstrates the advantages of stability than MQES\\\\_G. \\n\\n\\u3010Q2: Another question is: The horizon is cut to 100 while most papers and OpenAI Gym use 1000 by default. Why do the authors choose 100? How does MQES perform with longer horizon, like 1000? \\u3011\\n\\nOne consideration for our shortened episode length is training efficiency. Also, if the environment, sampling and training settings are the same as baseline, such comparison is fair, so we believe that the horizon doesn't matter as long as it isn't extremely outrageous.\\n\\n\\u3010Q3: The paper shows that exploring using both aleatoric and epistemic uncertainty can improve the performance. What if we consider only one of them, e.g., using only aleatoric uncertainty? I'd like to see this as ablation.\\u3011\\n\\nIn Sec 5.4.1, we provide an ablation study to show the performance gain brought by distinguishing the two types of uncertainty. 
\\n\\n\\u3010Q4: Table 1: Could you please highlight all algorithms within 1 std to the best?\\u3011\\n\\nIn the revised version, we highlight the best mean, which is more appropriate.\\n\\n\\u3010Q5: Sparse-HalfCheetah-v2: Could you please provide more details about the environment?\\u3011\\n\\nWe have added experiments on more sparse tasks, which are shown in Sec. 5.3. We describe detailed settings for sparse reward and show that MQES has a stable and consistent advantage over DSAC in those sparse tasks. Briefly, standard Mujoco gives a precise reward at each step, while in the sparse setting, the reward is given only when the agent moves past the threshold (see Section 5.3 for more details).\\n\\n\\u3010Q6\\uff1a What does the mutual information between $(Z^*,\\\\pi^*)$ and $(Z^{\\\\pi_E},\\\\pi_E)$ mean? Are $\\\\pi^*$ and $\\\\pi_E$ random variables? Moreover, in deterministic environments (as in Mujoco environments), $Z^*$ is also deterministic, so is $\\\\pi^*$; What is $p$? The first input of MI is simply a while the second is a pair . Please elaborate.\\u3011\\n\\nIn the revised version, we have rewritten the theoretical part, i.e., Section 4.1. Specifically, for MQES, we consider actions as random variables and policies are the distributions that the actions follow. Hence, given the state, the conditional mutual information is actually between the exploration action random variable $A_E\\\\sim\\\\pi_E$ and the random variable of the globally optimal $Q$ function $Z^*(s,a^*)$ in Eq. 9, which take values $a\\\\in\\\\bf{A}$ and $z^*(s,a^*)$, respectively. And the posterior probability $p(a|z^*(s,a^*),s)$ describes the distribution of the exploration action conditioned on the state and the globally optimal $Q$ function $z^*(s,a^*)$. Hence, Eq. 
9 is rewritten as: $ {\\\\bf{F}}^\\\\pi(s_t) = {\\\\bf{MI}}(Z^*(s,a^*),A|s = s_t) ={\\\\bf{H}}\\\\left[ \\\\pi(a_t|s_t)\\\\right]-{\\\\bf{H}}\\\\left[ p(a_t|z^*(s_t,a^*), s_t)\\\\right].$\\n\\n\\u3010Q7\\uff1a What does the mutual information between $(Z^*,\\\\pi^*)$ and $(Z^{\\\\pi_E},\\\\pi_E)$ mean? Are $\\\\pi^*$ and $\\\\pi_E$ random variables? Moreover, in deterministic environments (as in Mujoco environments), $Z^*$ is also deterministic, so is $\\\\pi^*$.\\u3011\\n\\nFor Eq. 8, the mutual information is conditioned on the state, which is the expectation of the exploration policy. Hence, to make it more rigorous, Eq. 8 is corrected as $\\\\pi_E = \\\\arg\\\\max_{\\\\pi\\\\in \\\\Pi} {\\\\bf{F}} ^\\\\pi(s_t).$\\n\\n\\u3010Q8\\uff1aTo measure the intractable distribution of $Z^*$ during training, we use the $\\\\hat{Z}^*$ for approximation Please rephrase it and say that $\\\\hat{Z}^*$ will be defined later\\uff1b This is not an unbiased estimation and also please clarify K\\u3011\\n\\nWe have corrected the corresponding part.\"}",
"{\"title\": \"Response\", \"comment\": \"\\u3010Q1: authors should give more details about the target policy introduced at Section 4.3. Actually, the reader should check Algorithm 2 at Appendix in order to understand its purpose and how the target policy is updated. I think that it would be better Algorithm 2 to be moved in the main paper if it is possible\\u3011\\n\\nWe agree that it is definitely better to move Algorithm 2 to the main paper However, due to the paper length limit, maybe we could not move it now\\u3002\\n\\n\\u3010Q2: Another point that should be discussed more clearly is the impact of the 3 hyper-parameters ($\\\\alpha$, $\\\\beta$, and $C$) on the performance of MQES. To be more specific, why did you set the uncertainty ratio equal to 1.6?\\u3011\\n\\t\\nTo clarify how the hpyer-parameters affect the performance, we conduct ablation study on hyper-parameters and results are shown in Sec. 5.4.2 and appendix F, and discuss the impact of the hyper-parameters on the performance accordingly. \\n\\n\\u3010Q3: Finally, the empirical results are not discussed at all. It seems for example that the performance of $MQES_G$ is more stable compared to that of $MQES_Q$. Moreover, the performance of DSAC is almost equal (or better) to that of $MQES_Q$. All these points should be explained or discussed by the authors.\\u3011\\n\\nIn the standard mujoco tasks and the reward is dense, MQES\\\\_G is more stable compared to that of MQES\\\\_Q in the easy tasks, e.g., Hopper-v2 and Walker-v2, but in the difficult tasks such as Ant-v2 and the sparse reward mujoco tasks (the tasks shown in Sec 5.3), MQES\\\\_Q is shown to be more stable. It is mainly because the policy follows Gaussian distribution, which renders the value function random variable more Gaussian in the easy tasks due to the relatively low action and state space dimension. Hence, in the easy tasks, the Gaussian formulation incorporates this prior into the learning. 
Hence, the MQES\\\\_G should be more stable than MQES\\\\_Q in the easy tasks.\\n\\nHowever, in the difficult tasks, which are with relatively higher state and action space or sparse (or un-smooth) reward, the Gaussian assumptions will not hold, and the quantile formulation should be more reasonably since it can represent distribution in the more flexible way.\\\\par\\n\\nAs shown in Figure 1, both MQES\\\\_G and MQES\\\\_Q show better performance than DSAC in the difficult tasks, and in other easy tasks as shown in Appendix F, MQES can also show slightly better than DSAC. It seems that exploration is not the main bottleneck in the easy tasks, i.e., standard Hopper-v2 and Walker2D-v2. However, in those harder and sparse reward tasks, MQES performs significantly better than DSAC, and MQES\\\\_Q demonstrates the advantages of stability than MQES\\\\_G.\"}",
"{\"title\": \"\\u3010Continued\\u3011Response\", \"comment\": \"\\u3010Q8: is $n$ used in Proposition 2?\\u3011\\n\\nIn proposition 2, the length of vector $m$ is the action dimension $n$. \\n\\n\\u3010Q9: the conventions of CDF in Proposition 2 and in Eq.21 should be made consistent with its first definition in Eq.11\\u3011 \\n\\nWe unify the conventions of CDF in the revised version.\\n\\n\\u3010Q10: definition of $G(\\\\cdot)$ is not used after its first definition.\\u3011 \\n\\nIn the revised version, we use $G(\\\\cdot)$ in the Proposition 2.\\n\\n\\u3010Q11: why the horizon is set to 100, instead of 1000 environment steps like in the SAC paper?\\u3011\\n\\nOne consideration for our shortened episode length is training efficiency. Also, if the environment, sampling and training settings are the same as baseline, such comparison is fair, so we believe that the horizon doesn't matter as long as it isn't extremely outrageous.\"}",
"{\"title\": \"Response\", \"comment\": \"\\u3010Q1: Although the idea is interesting, it's not yet clear how the proposed method can be used as an additional module on top of other entropy-regularized off-policy approaches like SAC or TD3, etc. The paper can benefit more if it can be formulated in such a more general way.\\u3011\\n\\nActually, MQES is indeed a generally exploration framework for off-policy actor-critic algorithms. Nevertheless, as an example, we incorporate MQES-based exploration into SAC in Section 4.2. Furthermore, if the policy is deterministic (like TD3), the exploration policy is just a special case of Eq. 13, which is without covariance matrix.\\n\\n\\u3010Q2: The practical implementation seems to make sense. However, it's still unclear to me how policies $\\\\pi_E$ and $\\\\pi_T$ are parameterized. Although a sketch of the main idea is described in Algorithm 1, it's not clear to me each term is parameterized and computed based on a particular parameterization.\\u3011\\n\\nIn Algorithm 1, we show that the target policy $\\\\pi_T$ is parameterized by $\\\\phi$. And we use Eq. 13 to derive $\\\\pi_E$, which introduces no extra parameters. \\n\\n\\u3010Q3: An updated policy as a solution of (18) gives an update on the mean but keeps the covariance unchanged. I was wondering then how this policy can adaptively change its exploration through the progress of learning?\\u3011\\n\\nIn the proof of Proposition 2, we prove that the covariance matrices of target and exploration policy are equal. Please check Appendix B for more details.\\n\\n\\u3010Q4: The experiment results are quite preliminary. There are no experiment settings. There are a number of hyper-parameters of MQES that might affect overall performance of MQES, e.g. N, $\\\\beta$ etc. but not discussed and ablated? 
The comparisons might also take into account other distributional policy search methods\\u3011\\n\\nIn the revised paper, we explain the experiment settings in more detail in Sec. 5.1. \\n\\nAlso, we conduct an ablation study on the hyper-parameters; the results are shown in Sec. 5.4.2 and Appendix F, and we discuss the impacts of the hyper-parameters on the performance. \\n\\nFor the benchmarks, as mentioned in Sec. 5.1, DSAC has been compared with TD4 (the distributional version of TD3), and outperformed TD4 in most tasks. So there is no need to compare with TD4, and we choose to implement our MQES based on DSAC and also compare with it, which obtains good results.\\n\\n\\u3010Q5: Eq. 4 and 5, 6: min and log instead of arg functions? \\u3011\\n\\nSorry for our negligence, and we have corrected the typo.\\n\\n\\u3010Q6: Eq. 9: What is the difference between posterior $p(a|Z^*(s,\\\\pi^*)$ and $\\\\pi^*$? Is the posterior not the optimal policy since $Z^*$ is the optimal distributional value function estimate? A detailed derivation for Eq.9 is expected; Definitions for terms in 4.1: $p(a|Z^*(s,\\\\pi^*)$ vs $p(a|Z^*(s,\\\\pi) p(a|Z^*(s,\\\\pi^*) p(a|Z^*(s,\\\\pi)$ etc; Some theoretical steps are not clearly justified of why.\\u3011\\n\\nWe admit that the theory presented in Sec. 4.1 is vague. In the revised version, we rewrite the equations in Sec. 4.1 to make them more accurate: \\n\\n 1. For equation 8, the mutual information is actually conditioned on the state. Hence, to make it more rigorous, equation 8 is rewritten as: $\\\\pi_E = \\\\arg\\\\max_{\\\\pi\\\\in \\\\Pi} {\\\\bf{F}} ^\\\\pi(s_t).$\\n \\n2. For MQES, we consider actions as random variables and policies are the distributions that the actions follow. Hence, given the state, the conditional mutual information is actually between the exploration action random variable $A_E\\\\sim\\\\pi_E$ and the random variable of the globally optimal $Q$ function $Z^*(s,a^*)$ in Eq. 
9, which take values $a_E\\\\in\\\\bf{A}$ and $z^*(s,a^*)$, respectively. And the posterior probability $p(a|z^*(s,a^*),s)$ describes the distribution of the exploration action conditioned on the state $s$ and the value $z^*(s,a^*)$ of the globally optimal $Q$ function random variable $Z^*(s,a^*)$. Hence, Eq. 9 is rewritten as: $ {\\\\bf{F}}^\\\\pi(s_t) = {\\\\bf{MI}}(Z^*(s,a^*),A|s = s_t) ={\\\\bf{H}}\\\\left[ \\\\pi(a_t|s_t)\\\\right]-{\\\\bf{H}}\\\\left[ p(a_t|z^*(s_t,a^*), s_t)\\\\right].$\\n\\n\\u3010Q7: Should the aleatoric uncertainty be the variance of the ${min_{\\\\theta_k} z_i(\\\\theta_k)}$ instead of en expectation over $\\\\theta_k$. Because $z_i$ is estimated as the min of two estimates, e.g. in Eq.4\\u3011\\n\\nThe aleatoric uncertainty is a property of the environment itself, whereas $Z_{\\\\pi}$ is the result of our pessimistic estimation, and the two are not directly related. At the same time, we need to estimate the aleatoric uncertainty of the environment realistically in order to accurately avoid it in our explorations, and it makes no sense to estimate the aleatoric uncertainty pessimistically.\"}",
"{\"title\": \"Thanks to all the reviewers. Summary of changes and paper revision.\", \"comment\": \"We greatly appreciate the reviews you provided on our paper. We are very pleased to get the valuable comments and excellent suggestions for further improving our work. Revisions have been made in the paper accordingly. The revisions are summarized as follows:\\n\\n1. For the theoretical part, i.e., Sec. 4.1, we rewrite Eq. 8, 9 and 10 to \\n make them more rigorous and show our theoretical contribution more clearly.\\n\\n\\n2. For the results part, to make our results more convincing, we conduct experiments in the sparse Mujoco environments, and describe more details of the setting. Also, we conduct an ablation study to show the sensitivity to hyper-parameters and the gain of distinguishing the two types of uncertainty.\\n\\n\\n3. Besides the theoretical and results parts, to make our paper easier to follow, we polish our paper from the perspective of paper structure, symbol clarifications and language, according to the comments.\\n\\n\\nA point-by-point comment-response section is given next. Please note that all the equations and references refer to those in the revised version of the manuscript unless otherwise indicated. Main changes in the revised manuscript are highlighted in red for the ease of cross references. We hope our responses clarify your concerns. We are looking forward to your future comments if any.\"}",
"{\"title\": \"Interesting idea but unconvincing results.\", \"review\": [\"This paper proposes MQES, a Max-Q entropy search for policy optimization in continuous RL. The authors propose to combine advantages of the information-theoretic principle and distributional RL, in which epistemic and aleatoric uncertainty are estimated using similar entropy-search acquisition functions in Bayesian Optimization (BO). As said, this is a new method to introduce a more efficient exploration strategy. As a result, policy improvement is formulated as a constrained optimization problem where a next exploration policy can be solved in closed form. The proposed method is evaluated on Mujoco tasks and compared against other off-policy approaches, SAC, and DSAC. The results show MQES outperforms other methods in domains where exploration is needed.\", \"The main contribution of the paper is to introduce an approximation to distributional Q functions that is based on the epistemic and aleatoric uncertainty. The main objective is based on the mutual information maximization as described in 4.1. A practical implementation is proposed in 4.1. The idea makes sense; however, the presentation and the experimental results make it hard to understand some important details. Some of my major comments are as follows.\", \"1. Although the idea is interesting, it's not yet clear how the proposed method can be used as an additional module on top of other entropy-regularized off-policy approaches like SAC or TD3, etc. The paper can benefit more if it can be formulated in such a more general way.\", \"2. The practical implementation seems to make sense. However, it's still unclear to me how policies \\\\pi_E and \\\\pi_T are parameterized. Although a sketch of the main idea is described in Algorithm 1, it's not clear to me how each term is parameterized and computed based on a particular parameterization.\", \"3. 
An updated policy as a solution of (18) gives an update on the mean but keeps the covariance unchanged. I was wondering then how this policy can adaptively change its exploration through the progress of learning?\", \"4. The experiment results are quite preliminary. There are no experiment settings. There are a number of hyperparameters of MQES that might affect the overall performance of MQES, e.g. N, \\\\beta, etc., but these are not discussed or ablated. The comparisons might also take into account other distributional policy search methods.\", \"5. And some minor comments\", \"Eq. 4 and 5, 6: min and log instead of arg functions?\", \"Eq. 9: What is the difference between posterior p(a|Z^*(s,\\\\pi^*) and \\\\pi^*? Is the posterior not the optimal policy since Z^* is the optimal distributional value function estimate? A detailed derivation for Eq.9 is expected.\", \"Definitions for terms in 4.1: p(a|Z^*(s,\\\\pi^*) vs p(a|Z^*(s,\\\\pi) p(a|Z^*(s,\\\\pi^*) p(a|Z^*(s,\\\\pi) etc.\", \"in Eq.14: Should the aleatoric uncertainty be the variance of the {min_{\\\\theta_k} z_i(\\\\theta_k)} instead of an expectation over \\\\theta_k. Because z_i is estimated as the min of two estimates, e.g. in Eq.4\", \"Some theoretical steps are not clearly justified.\", \"is $n$ used in Proposition 2?\", \"the conventions of CDF in Proposition 2 and in Eq.21 should be made consistent with its first definition in Eq.11\", \"definition of G() is not used after its first definition.\", \"why the horizon is set to 100, instead of 1000 environment steps like in the SAC paper?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"MaxQ Entropy Search Blind Review #1\", \"review\": \"This work introduces the max-Q Entropy Search (MQES) exploration principle for continuous RL algorithms. MQES addresses the exploration-exploitation dilemma that constitutes a fundamental RL problem. Actually, MQES defines an exploration policy able to explore optimistically and avoid over-exploration. One of the main advantages of MQES is its ability to recognise the epistemic and aleatoric uncertainty. Empirical analysis has been conducted on Mujoco, showing that the performance of MQES is comparable to that of other state-of-the-art algorithms.\\n\\nIn general, the paper is well written and can be easily followed by the reader. Nevertheless, some parts of MQES should be explained in more detail. For instance, authors should give more details about the target policy introduced in Section 4.3. Actually, the reader should check Algorithm 2 in the Appendix in order to understand its purpose and how the target policy is updated. I think that it would be better for Algorithm 2 to be moved to the main paper if possible. Another point that should be discussed more clearly is the impact of the 3 hyper-parameters (\\\\alpha, \\\\beta, and C) on the performance of MQES. To be more specific, why did you set the uncertainty ratio equal to 1.6? Finally, the empirical results are not discussed at all. It seems for example that the performance of MQES_G is more stable compared to that of MQES_Q. Moreover, the performance of DSAC is almost equal to (or better than) that of MQES_Q. All these points should be explained or discussed by the authors.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"---\\nSummary\\n\\nThis paper studies the problem of efficient exploration in continuous environments. It proposes a novel algorithm, Max-Q Entropy Search (MQES), and utilizes Distributional SAC (DSAC) to formulate the uncertainties. Experiments show that the proposed algorithm, MQES, outperforms the baselines (SAC, DSAC). \\n\\n---\\nComments\\n\\nHowever, MQES doesn't show significant improvement over DSAC. For example, except for Sparse-HalfCheetah-v2, MQES_Q has almost the same performance as DSAC. Except for Sparse-HalfCheetah-v2 and Ant-v2, MQES_G has almost the same performance as DSAC.\", \"another_question_is\": \"The horizon is cut to 100 while most papers and OpenAI Gym use 1000 by default. Why do the authors choose 100? How does MQES perform with a longer horizon, like 1000?\\n\\nThe paper shows that exploring using both aleatoric and epistemic uncertainty can improve the performance. What if we consider only one of them, e.g., using only aleatoric uncertainty? I'd like to see this as an ablation. \\n\\n\\n---\\nWriting Quality\\n\\nThe writing can also be improved.\", \"table_1\": \"Could you please highlight all algorithms within 1 std to the best?\", \"sparse_halfcheetah_v2\": \"Could you please provide more details about the environment?\\n\\nWhat does the mutual information between $(Z^*, \\\\pi^*)$ and $(Z^{\\\\pi_E}, \\\\pi_E)$ mean? Are $\\\\pi^*$ and $\\\\pi_E$ random variables? Moreover, in deterministic environments (as in Mujoco environments), $\\\\pi^*$ is also deterministic, and so is $Z^*(s_t, a_t)$.\", \"eq_8\": \"LHS is a scalar (probability), while RHS is a policy. Please clarify notations to avoid confusion.\", \"eq_9\": \"What is $p$? The first input of MI is simply a $Z^*$ while the second is a pair $(Z^\\\\pi, \\\\pi(a_t | s_t))$. 
Please elaborate.\\n\\n> To measure the intractable distribution of $Z^*$ during training, we use the $\\\\hat Z^*$ for approximation \\n\\nPlease rephrase it and say that $\\\\hat Z^*$ will be defined later.\", \"eq_20\": \"This is not an unbiased estimation, as $\\\\mathbb{E}[X^{-1}] \\\\neq \\\\mathbb{E}[X]^{-1}$. Also please clarify K.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Are the empirical results significant enough to support the proposed exploration heuristic?\", \"review\": \"The paper proposes an exploration scheme for RL in continuous action spaces using the principle of information maximization for globally optimal Q distribution.\\n\\n1. I felt that the paper isn't well written and discusses a lot of different concepts in a haphazard manner. There are a lot of equations and symbols in the text without proper explanation and context which make it difficult to gather the main contribution. The language used gets vague in many statements made in the paper. For ex. \\\"Proposition 1. Generally, the posterior probability is as follows\\\".\\n\\n2. A lot of algorithm adaptations are proposed without actually carrying out ablations which make it difficult to discern if the proposed MI maximization is indeed responsible for performance. For ex. \\\"Since the target for critic in the advanced algorithms, like SAC and TD3, is usually estimated pessimistically..\\\". The authors should actually present ablations to support if a pessimistic estimate is indeed required for their adaptation for these methods. \\n\\n3. Why haven't the authors included OAC as a baseline given that it outperforms SAC in several tasks? Further the results show little difference in performance in comparison with DSAC on the mujoco tasks, given that only 5 seeds were used in evaluation, it brings the significance of the results under question. The authors should provide appropriate measures like P-values to support the experiments.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": [\"The paper proposes an information-theoretic approach to exploration in model-free RL, by encouraging an exploration policy that is maximally informative about the optimal (distributional) value function. The authors discuss a tractable approximation to this objective which can be implemented in continuous MDPs. The method is evaluated on benchmark continuous control tasks (Mujoco).\", \"The basic underlying idea is interesting and, to the best of my knowledge, novel. However, there are some major issues and/or limitations which should be addressed prior to publication.\", \"**Clarity**\", \"The paper is hard to follow and understand.\", \"There is a use of terminology which might not be clear or known to the general RL/exploration audience (\\\"acquisition functions\\\", \\\"heteroscedastic aleatoric uncertainty\\\"). The entire discussion of the two types of uncertainty seems somewhat disconnected from the method itself. A short explanation of why and how the two types of uncertainty are important *for the exploration problem* would be very helpful.\", \"There are some notation obscurities or inaccuracies. This is most notable in Section 4.1, which is unfortunate since this is where the key ideas of the approach are discussed.\", \"Equation 8, which is central to what follows, is rather confusing. The text mentions that \\\"$\\\\pi_E$ selects action $a_t$ that (...)\\\", but the equations then seem to define an entire policy. And the optimization problem (in the same Eq.) is in itself dependent on $a_t$, so it's not even clear one gets a valid policy/distribution from something like $\\\\pi_E = \\\\arg\\\\max_\\\\pi F^\\\\pi(s_t,a_t)$.\", \"Following the previous point, Eq. 9 is also confusing. It's not clear to me what the authors mean by measuring MI between $Z^*$ and *the pair* $(Z^\\\\pi, \\\\pi)$? 
It's also not clear what is the meaning of the \\\"posterior distribution\\\" denoted by $p$, and why the mutual information is measured for the Z parameters (return probs) but then re-expressed as the difference in entropies for the policies (action probs).\", \"**Quality**\", \"The paper has a good balance of a theoretically motivated algorithm, a practical implementation of it, and some basic empirical evaluation. Other than clarity issues discussed before, I have some concerns regarding the evaluation, and one more conceptual concern regarding the general idea:\", \"Since exploration here is encouraged by choosing informative actions about Q*, it's not clear that this method will be helpful in very sparse-reward settings (which are a central motivation for sophisticated exploration techniques). Put differently, relying on Q* to guide exploration ultimately couples exploration to the external reward, which seems rather undesirable to me. The method might be helpful from other perspectives (optimization, controlling the level of \\\"over-exploration\\\" etc), but it's not clear that it is helpful as an exploration method per se.\", \"The empirical evaluation is done only against rather limited baselines (basically random exploration baselines). I would encourage the authors to compare their method to other forms of exploration. Particularly relevant to this paper is the work by Houthooft et al. 2016 (VIME) which also uses information-theoretic objective for exploration in continuous problems. 
The authors should at least cite this paper in their related work section.\", \"Following the last two points, a great improvement would be if, rather than just demonstrating performance in terms of reward, the authors evaluated the exploratory behavior itself of an agent trained with their method, in some simple environment (i.e. in terms of novel states visited / distance traveled / etc.)\"], \"there_are_some_more_minor_issues_which_should_be_addressed_as_well\": [\"In the abstract: the use of \\\"optimism in the face of uncertainty\\\" is definitely not \\\"recent\\\"\", \"\\\"The above methods are not efficient\\\" (3rd paragraph, Introduction): This is not accurate. UCB (and other methods) **are** provably efficient for several problems/assumptions.\", \"Some relevant literature is missing from the related work. Most notable is the VIME paper mentioned earlier (Houthooft et al. NeurIPS 2016) and the Fox et al. 2018 ICLR paper (DORA The Explorer) which combines counter-based-like exploration with an optimism principle for high-dim MDPs.\", \"Section 3.2: $T^\\\\pi$ is **not** the \\\"Bellman optimality operator\\\" but rather the Bellman operator for policy $\\\\pi$.\", \"**Conclusions**\", \"This work has some interesting ideas which could be useful for training RL agents in continuous problems. However, in the current form of the paper it's hard to evaluate and understand some of the key ideas of the work. Given this, and together with the more conceptual concerns regarding evaluation and the basic approach, I think the paper is not ready for publication.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
JyDnXkeJpjU | Task-similarity Aware Meta-learning through Nonparametric Kernel Regression | [
"Arun Venkitaraman",
"Anders Hansson",
"Bo Wahlberg"
] | This paper investigates the use of nonparametric kernel-regression to obtain a task-similarity aware meta-learning algorithm. Our hypothesis is that the use of task-similarity helps meta-learning when the available tasks are limited and may contain outlier/dissimilar tasks. While existing meta-learning approaches implicitly assume the tasks as being similar, it is generally unclear how this task-similarity could be quantified and used in the learning. As a result, most popular meta-learning approaches do not actively use the similarity/dissimilarity between the tasks, but rely on the availability of a huge number of tasks to work. Our contribution is a novel framework for meta-learning that explicitly uses task-similarity in the form of kernels and an associated meta-learning algorithm. We model the task-specific parameters to belong to a reproducing kernel Hilbert space where the kernel function captures the similarity across tasks. The proposed algorithm iteratively learns a meta-parameter which is used to assign a task-specific descriptor for every task. The task descriptors are then used to quantify the task-similarity through the kernel function. We show how our approach conceptually generalizes the popular meta-learning approaches of model-agnostic meta-learning (MAML) and Meta-stochastic gradient descent (Meta-SGD). Numerical experiments with regression and classification tasks show that our algorithm outperforms these approaches when the number of tasks is limited, even in the presence of outlier or dissimilar tasks. This supports our hypothesis that task-similarity helps improve the meta-learning performance in task-limited and adverse settings. | [
"Task-similarity",
"Meta-learning",
"Kernel regression",
"Nonparametric regression",
"Task-descriptors"
] | Reject | https://openreview.net/pdf?id=JyDnXkeJpjU | https://openreview.net/forum?id=JyDnXkeJpjU | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"0eWyU6MOTsj",
"r_4zK9mOoV7",
"uykEiVTo0nV",
"AMTtrhGEeIl",
"-TGlYXb7AK3",
"s9Y6MxazSDG",
"EJv4um4rlZv",
"D_I0_i7WbcD",
"dkQ2zIaS00h",
"mxnqg70QLyV"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040379599,
1606300741720,
1606273215578,
1606273031742,
1606272577086,
1606271693680,
1603878161365,
1603791618421,
1603738332003,
1603297219069
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3525/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3525/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3525/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3525/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3525/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3525/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3525/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3525/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3525/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a method that incorporates task-similarity (via task gradients) into meta-learning. The inner loop update is done by kernel regression with the similarity between gradients of tasks considered, and the outer loop is the gradient update with a particular regularization. Without any doubt, it is a timely and important topic to develop a meta-learning method in the presence of outlier tasks. All reviewers criticized that the experiments were done on only simple datasets without any ablation study. The authors revised their manuscript to include more experiments and tried to clarify the relation of their method to Meta-SGD. Unfortunately, however, even after the author response, reviewers were not convinced that their concerns were resolved. In particular, it was claimed that the revised version still lacks comparisons to previous relevant work.\"}",
"{\"title\": \"Ok but not good enough\", \"comment\": \"After reading the authors' response and skimming through the modified writeup, I keep my original score.\\n\\nAuthors still don't compare to the previous, related work (they merely list it).\\n\\nThe addition of a simplified version of Omniglot (and another, \\\"real-world\\\" dataset) is a step in the right direction, yet the experimental sophistication is still behind MAML (which used the full version of Omniglot and a harder dataset, MiniImagenet, for few-shot classification), not to mention its contemporary, derived works.\"}",
"{\"title\": \"Authors' response to AnonReviewer2\", \"comment\": \"**Authors**\\n\\nWe would like to thank the reviewer for evaluating our work and sharing their feedback.\\n\\n**Reviewer**\\n\\n_Principled approach to a challenging and current problem of interest for the sub-field of meta-learning and beyond. However, formalization of meta-learning approaches in terms of the NTK is recent but not novel [see reference Wang 2020 in paper]._\\n\\n_Good work in progress, but the attempt to publish is premature\\nVery poor representation of relevant and conceptually similar recent work, see [1] for a comprehensive review, and specifically [2, 3, 4, 5, 6] for similar approaches._\\n\\n**Authors**\\n\\nThank you for drawing our attention to these valuable references. We have now included them in the manuscript in the proper context. We note that the NTK and meta-learning work by Wang et al. considers a kernel which comes from an asymptotic analysis considering very wide neural networks (in many cases requiring the network size to tend to infinity). Our approach differs from these works since it does not require such an asymptotic analysis. Our kernel approach is also not restricted to the use of neural networks as predictors and can be applied to all types of predictors. Further, the NTK uses a specific form of the kernel, whereas our formulation allows for any valid kernel function. We have now included a discussion on this in Section 1.2.\\n\\n**Reviewer**\\n\\n_Inaccurate claims of novelty are made; they must be made more specific and put into context. For example, the claim that task descriptors have not been used in the design of meta-learning algorithms is false, see [5, 6]._\\n\\n**Authors**\\nThank you for the comment. We have modified the appropriate portions of the manuscript. Please see the second paragraph on page 6. 
\\n\\n**Reviewer**\\n\\n_That said, the current approach could be used to analyze such SOTA approaches and perhaps explain their performance._\\n\\n**Authors**\\n\\nThank you for the encouraging comment. We are indeed considering working along this direction in the future.\\n\\n**Reviewer**\\n\\n_Meta-training data reuse across tasks at test time has been proposed previously, e.g. [7], so it is also not novel to this paper._\\n\\n**Authors**\\n\\nWe agree, and do not claim that data reuse at test time has not been proposed previously. We have also cited the reference [7] pointed out by you.\\n\\n**Reviewer**\\n\\n_Very weak experimental evidence. Please use some of the few-shot image classification datasets, or standard RL tasks available since MAML was published._\\n\\n**Authors**\\n\\nThank you for the comment. After taking all the reviews carefully into consideration, we have now included multiple new experiments on real-world regression tasks and on few-shot learning for the Omniglot dataset. Please see the newly added paragraphs on Experiments 3 and 4 in Section 4 of the revised manuscript. We have also included new sections discussing some aspects of our approach, and its explicit relation to MAML and Meta-SGD in the Appendix.\\n\\n**Reviewer**\\n\\n_Proposed method needs extensive approximations to scale up to more interesting problems._\\n\\n**Authors**\\n\\nWe fully agree. As discussed in the manuscript earlier, the framework of kernel regression requires that all tasks are taken together and not sequentially, making it necessary to have approximations at higher dimensions. However, our focus was on investigating the merit of using task-similarity in the limited task setting. As a result, no approximations were found necessary or used in our experiments in this setting.\"}",
"{\"title\": \"Authors' response to AnonReviewer1\", \"comment\": \"**Authors**\\n\\nWe would like to thank the reviewer for evaluating our work and sharing their feedback.\\n\\n**Reviewer**\\n\\n_In my opinion... Comparison to and commentary on the previous metric-based meta-learning papers present in the work under review is unsatisfactory_\\n\\n**Authors**\\n\\nTo the best of our knowledge, most of the existing kernel and metric-learning based methods deal exclusively with classification and image recognition/few-shot learning settings, whereas our goal is to develop a general kernel-based formulation that is applicable to any meta-learning modality. In most cases, the use of kernel regression has been to describe similarity across the datapoints within a class or a task. In Achille et al. 2019, a separate and specific probe network is employed to extract features that are suited to visual classification tasks $-$ the task similarity is then used to select the best model from the existing set of models of the training tasks to describe the new task. This is different from our approach, where the parameters of a new task are obtained through similarity with training tasks, and not by setting the parameter value to that of one of the similar training tasks. Our task-descriptor comes directly from the task and model, without requiring an additional feature extraction network. \\n\\t\\t\\nWe have now included additional relevant works in the related works, and also contrasted our approach with a more recent work on neural tangent kernels in meta-learning. Please see Section 1.2 of the revised manuscript. We agree that a more detailed comparison to existing metric learning approaches would provide further perspective on and understanding of our approach and, more generally, on the use of metric-learning in meta-learning. We intend to pursue further research along these lines in the future.\\n\\n**Reviewer**\\n\\n_The performed experiments are extremely toyish... 
authors need to settle with optimization tricks to learn their TANML model._\\n\\n**Authors**\\n\\nThank you for the comment. We have now included additional experiments on real-world data for both regression and classification tasks. Please see the newly added paragraphs on Experiments 3 and 4 in Section 4 of the revised manuscript. \\nSince we consider the regime of a limited number of tasks in our experiments, no approximations or optimization tricks were involved or found necessary in learning our model.\\n\\n**Reviewer**\\n\\n_I recommend against publication at ICLR...an expectation of extensive experimental evidence which this paper is lacking._\\n\\n**Authors**\\n\\nWe thank you for your valuable feedback. We have now included several new experiments on real-world regression tasks, and also on the Omniglot dataset. Please see the newly added paragraphs on Experiments 3 and 4 in Section 4 of the revised manuscript. We have also included new sections on an ablation study of some aspects of our approach, and its explicit connections with Meta-SGD and MAML. Please see Section A of the appendix.\\n\\n**Reviewer**\\n\\n_Suggestion: To expand the research to higher-dimensional,... and see if an introduction of a kernel gives a measurable improvement._\\n\\n**Authors**\\n\\nThank you for the very insightful suggestion of incorporating a hard-coded kernel with domain-specific intuition/understanding of tasks. Our work here presents a first step towards the introduction of similarity kernels in meta-learning, our goal being to arrive at a consistent and general notion of kernel and task-descriptor. Hence, we opted for the use of general kernels that appear from the formulation. However, we fully agree with the observation of the reviewer, and in future work, we will pursue the use of alternative task-descriptors or kernels that aid in better scaling to higher dimensions through the incorporation of human understanding. 
This will include the use of domain-specific features/metrics as in the case of task2vec (Achille et al. 2019) and other recent works.\\n\\n**Reviewer**\\n\\n_\\\"Training for tasks individually will result in a predictor that overfits to,... and generalizes poorly\\\": It's unclear what \\\"individually\\\" means here...I encourage authors to avoid using such statements without referring to argumentation behind them._\\n\\n**Authors**\\nThank you for drawing our attention to this. The better word is 'independently', and we have now changed it accordingly. Please see the revised Section 1.1.\\n\\n**Reviewer**\\n\\n_I am also confused by the sentence ...isn't the (cited by authors) Achille et al. (2019) one example of such work?_\\n\\n**Authors**\\n\\nThank you for pointing this out. We have now modified the sentence. Please see the second paragraph on page 6.\\n\\n**Reviewer**\\n\\n_Along the whole paper, \\\\citep is used, even when \\\\citet is appropriate. See when to use each one here._\\n\\n**Authors**\\n\\nThank you for pointing out the error. We have now modified the citation in the appropriate instances. \\n\\n**Reviewer**\\n\\n_TANML in Sec. 4._\\n\\n**Authors**\\n\\nThank you, it is now rectified.\"}",
"{\"title\": \"Authors' response to AnonReviewer4\", \"comment\": \"**Authors**\\n\\nWe would like to thank the reviewer for evaluating our work and sharing their feedback.\\n\\n**Reviewer** \\n\\n_Unfortunately, the relationship between Generalized Meta-SGD and TANML is unclear... it is unclear how TANML relates to Meta-SGD.... For instance, if TANML is a generalization of MAML, it would be good to state with which particular choices_ $\\\\theta_0$ and $\\\\pmb\\\\Psi$, _we can recover MAML._\\n\\n**Authors** \\n\\nWe thank you for the valuable feedback. We have now included a discussion that brings out the explicit relationship between TANML, Meta-SGD, and MAML. Please see the last sentence of the paragraph following Eq (2), and the new Section A of the Appendix.\\n\\n**Reviewer** \\n\\n_The related work section is quite minimalistic. For instance, discussing how TANML is different from e.g. multi-task nonparametric methods (e.g. [1-2]) that also use a kernel between tasks, would better clarify how TANML relates to previous work._\\n\\n**Authors**\\n\\n Thank you for the comment. Multi-task nonparametric methods address an entirely different setting $-$ for a given input $x$, the goal is to model the associated vector target $\\\\mathbf{y}$ using kernel regression/Gaussian processes. Each component of the vector target is referred to as a 'task', and multi-task learning thus deals with predicting different variables or components at once using mutual correlation in the form of matrix kernels. In contrast, in the meta-learning setting, a task refers to a learning problem by itself with its associated input-output data (as we have mentioned in Section 1.1). In our approach, the kernel regression adaptation is used to predict the optimal parameters $\\\\pmb\\\\theta$ for a given task by taking the gradient of the loss function $\\\\nabla\\\\mathcal{L}$ as the input variable to the kernel. 
Since multi-task learning and our approach address completely different problems even in terms of what they refer to as tasks, we have not included the works on multi-task learning in the related works.\\n\\tHowever, we have now included multiple new relevant works under the related works. In particular, we have discussed our approach in the context of a recent related work on kernels in meta-learning. Please see Section 1.2 of the revised manuscript.\\n\\n**Reviewer**\\n\\n_The numerical experiments are very simple / limited and designed in a pathological way....\\nA real-world use case in which we expect to see a meta-training set with e.g. outliers similar to experiment 2._\\n\\n**Authors**\\n\\nThank you for the comment. We have now included new experiments on real-world regression datasets and on few-shot learning. Please see the newly added paragraphs on Experiments 3 and 4 in Section 4 of the revised manuscript. We have also added the missing information regarding the numerical experiments in the main text and in the appendix.\\n\\n**Reviewer**\\n\\n_Experiments with real world meta-learning datasets. For real-world \\\\& small-scale meta-learning environments for regression, see e.g._[3].\\n\\n**Authors**\\n\\nThank you for the suggestion. We have now included experiments on time-series prediction for the Physionet 2012 Challenge dataset for real-world regression tasks as recommended by you, and on the Omniglot dataset. Please see the newly added paragraphs on Experiments 3 and 4 in Section 4 of the revised manuscript.\\n\\n**Reviewer**\\n_An additional meta-learning setup without outliers / clusters of meta-learning tasks. This way one can assess how the proposed method compares to MAML/Meta-SGD in standard setting_\\n\\n**Authors**\\n\\nThank you for the suggestion. The new experiments on real-world datasets for both regression and classification, which are now included, are performed without the presence of outliers or explicit clusters. 
Please see the newly added paragraphs on Experiments 3 and 4 in Section 4 of the revised manuscript.\\n\\n**Reviewer** \\n\\n_Adding missing details, e.g. to the appendix, which are necessary for reproducing the experiment._\\n\\n**Authors**\\n\\n Thank you, we have now added the missing details in the Appendix.\\n\\n**Reviewer**\\n\\n_Overall assessment: I vote for rejecting the paper... Overall, TANML has scientific merit - when introduced with a convincing storyline and properly supported by realistic experiments and relevant baseline comparisons, this would be a clear accept._\\n\\n**Authors**\\n\\nWe thank you for the detailed assessment. We have now included new experiments on real-world data including the Omniglot dataset. We have also discussed the explicit connection of TANML to Meta-SGD and MAML, and included an ablation study of some of the aspects of our approach. Please see the newly added paragraphs on Experiments 3 and 4 in Section 4, and Section A of the Appendix in the revised manuscript.\\n\\n**Reviewer** \\n\\n_=== Minor remarks ===_\\n\\n**Authors** \\n\\nThank you for giving it a careful consideration. All the minor remarks have been fixed and we have carefully gone through the manuscript for other typos.\"}",
"{\"title\": \"Authors' response to AnonReviewer3\", \"comment\": \"**Authors**\\n\\nWe would like to thank the reviewer for evaluating our work and sharing their feedback.\\n\\n**Reviewer**\\n\\n_While the experiments show some promise for the method, these on simplistic datasets involving synthetic datasets for estimating randomized linear and sinusoid predictors. Given that the paper discusses MAML and Meta-SGD in some detail for setting up the new method, experiments on the Omniglot and MiniImagenet datasets considered in both those papers would help to better evaluate the proposed approach_\\n\\n**Authors**\\n\\nWe thank you for the detailed assessment. We have now included new experiments on the Omniglot dataset. Due to limitations on available computational resources, we are yet unable to perform the experiments on the miniImagenet dataset. We have also included new experiments on various real-world regression tasks in the revised manuscript. Please see the newly added paragraphs on Experiments 3 and 4 in Section 4 of the revised manuscript.\\n\\n**Reviewer**\\n\\n_The paper has no ablations or analysis for particular parts of their method, such as removing the gradient from the kernel function or removing the regularization term from the outer loop. Thus even on the simplistic datasets considered, it is hard to judge which aspects of the method make it work better._\\n\\n**Authors**\\n\\nWe thank you for the suggestion. We have now included a new section with an ablation study. Please see Section 4 and Section A of the Appendix.\\n\\n**Reviewer**\\n\\n_I am willing to increase my score if the authors include experiments on the datasets mentioned above, and include additional analysis/ablations of their method_\\n\\n**Authors**\\n\\nWe thank you for your valuable feedback. We have now included new experiments on real-world regression tasks, and also on the Omniglot dataset. 
Please see the newly added paragraphs on Experiments 3 and 4 in Section 4 of the revised manuscript. We have also included new sections analyzing our approach and its explicit connections with Meta-SGD and MAML. Please see Section A of the Appendix.\"}",
"{\"title\": \"Interesting theoretical formulation, but insufficient experimental analysis\", \"review\": \"This paper proposes a theoretical formulation for meta-learning that uses task similarity based on task gradients, which helps learning in the presence of outlier tasks. The inner loop parameter update is given by linear kernel regression, where the kernel function computes similarity between gradients of different tasks. While the paper includes experiments that outperform MAML and Meta-SGD on estimating randomized linear predictors, and randomized sinusoids with outlier data-points, these are not sufficient to establish the efficacy of the approach.\", \"pros\": \"1. This paper proposes a solution to the problem of meta-learning with dissimilar tasks, which is a central problem in meta-learning. The formulated approach is a generalization of MAML and Meta-SGD, as the update direction isn't necessarily the gradient direction, and there is an additional regularization term on meta-parameters in the outer loop. Tasks with similar gradients have a similar update in the meta-learning inner loop. \\n\\n2. The formulation involving linear kernel regression which enables including the task similarity in the kernel function seems novel and could provide the basis for subsequent work in the field for dealing with dissimilar tasks. The usage of similarity between task gradients to guide updates is similar in spirit to gradient projection techniques used in continual learning to avoid catastrophic forgetting. \\n\\n3. The included experiments do show much superior performance to MAML and Meta-SGD on the datasets considered, which included tests with outlier data points for sinusoid regression.\", \"cons\": \"1. While the experiments show some promise for the method, these on simplistic datasets involving synthetic datasets for estimating randomized linear and sinusoid predictors. 
Given that the paper discusses MAML and Meta-SGD in some detail for setting up the new method, experiments on the Omniglot and MiniImagenet datasets considered in both those papers would help to better evaluate the proposed approach. \\n\\n2. The paper has no ablations or analysis for particular parts of their method, such as removing the gradient from the kernel function or removing the regularization term from the outer loop. Thus even on the simplistic datasets considered, it is hard to judge which aspects of the method make it work better. \\n\\nI am willing to increase my score if the authors include experiments on the datasets mentioned above, and include additional analysis/ablations of their method.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Overall a sound algorithm, but framed/motivated in a strange way and hardly supported by relevant/realistic experiments\", \"review\": \"=== Summary ===\\n\\nThe paper proposes a meta-learning method based on a notion of task similarity/dissimilarity. In particular, the paper motivates its proposed method TANML through a generalization of Meta-SGD wherein the learnable parameter-wise learning rate in the inner update of Meta-SGD is replaced by a quadratic pre-conditioner matrix.\\n\\nThe proposed method TANML closely resembles gradient-based meta-learners in the outer update but replaces the inner update by the matrix-vector product of kernel regression coefficient matrix and task similarity vector based on a kernel function. In that, the kernel function effectively quantifies the similarity of the loss gradients of the different tasks, evaluated at a learnable parameter initialization. Overall, the coefficient matrix can be understood as a look-up matrix in which each row holds the learned parameter vector for one meta-training task; the final adapted parameters are a linear combination of these parameter vectors, weighted by the kernel between the current task and the meta-train tasks.\\n\\nIn two simple simulated experiments, the paper demonstrates that TANML is able to outperform MAML and Meta-SGD when the meta-train tasks are set up in a pathological way (e.g. by combining two dissimilar clusters of tasks or by adding outlier tasks).\\n\\n=== Reviewer\\u2019s main argument ===\\n\\nOverall, the idea of incorporating a notion of task-similarity into the meta-learner and the particular proposal to use the kernel between the task loss gradients to quantify such similarity is sound and is a valuable contribution in itself. \\n\\nUnfortunately, the relationship between Generalized Meta-SGD to TANML is unclear. Usually the connection between linear regression (c.f. Eq. 1 in the paper) and kernel regression (Eq. 
2) is established through the particular form of the kernel regression coefficients. However, since the coefficient matrix is (meta-)learned in the paper, it is unclear how TANML relates to Meta-SGD. In fact, TANML seems more like a learned linear combination of task parameters which does not share much commonality with MAML. Overall, the connection to MAML seems a bit set-up/artificial. Discussing the particular relationship between MAML/Meta-SGD and TANML would improve the storyline of the paper. For instance, if TANML is a generalization of MAML, it would be good to state with which particular choices of $\\\\theta_0$ and $\\\\Psi$, we can recover MAML.\\n\\nThe related work section is quite minimalistic. For instance, discussing how TANML is different from e.g. multi-task nonparametric methods (e.g. [1-2]) that also use a kernel between tasks, would better clarify how TANML relates to previous work.\\n\\nThe numerical experiments are very simple / limited and designed in a pathological way. Thus, it is not surprising that MAML/Meta-SGD perform worse than TANML. How applicable the experimental results are in more realistic meta-learning setups is unclear. Despite the simplicity of the experiments, there is not enough information to properly reproduce the experiment. For instance, how are A and $\\\\omega$ in experiment 2 sampled, how are the x in experiment 1 sampled and how many data points per task are used in experiment 1? The following would strengthen the experiment section:\\n- A real-world use case in which we expect to see a meta-training set with e.g. outliers similar to experiment 2\\n- Experiments with real world meta-learning datasets. For real-world & small-scale meta-learning environments for regression, see e.g. [3].\\n- An additional meta-learning setup without outliers / clusters of meta-learning tasks. This way one can assess how the proposed method compares to MAML/Meta-SGD in standard setting\\n- Adding missing details, e.g. 
to the appendix, which are necessary for reproducing the experiment.\\n\\n=== Overall assessment ===\\n\\nI vote for rejecting the paper. In the current state, the storyline from MAML to TANML provides little value to me as a reader. The proposed algorithm resembles a classical kernel-weighted linear combination of parameters and the pathological toy experiments provide little value for assessing the actual usefulness of TANML in realistic meta-learning scenarios. However, using the kernel between the task loss gradients as a similarity metrics of task is a nice idea and is a valuable contribution. I highly encourage the authors to further improve the paper. Overall, TANML has scientific merit - when introduced with a convincing storyline and properly supported by realistic experiments and relevant baseline comparisons, this would be a clear accept.\\n\\n=== Minor remarks ===\\n\\n- Section 2: Eq. 1: move the comma. It should be $[\\\\theta_0^\\\\top, \\\\nabla_{\\\\theta_0} \\\\mathcal{L}$ \\u2026\\n- Section3: Either the $\\\\Psi$ should be a $T \\\\times D$ matrix, or there should be no transpose in Eq. 2\\n- Section 3 Eq 2: The kernel in the sum should probably be between i and i\\u2019, not between i and i.\\n- Section 4.1, 2nd paragraph: \\u201c... could be ascribed [to] its linear nature \\u2026\\u201d\\n\\n\\n[1] Bonilla, Edwin V., Kian M. Chai, and Christopher Williams. \\\"Multi-task Gaussian process prediction.\\\" Advances in neural information processing systems. 2008.\\n\\n[2] Micchelli, Charles A., and Massimiliano Pontil. \\\"Kernels for Multi--task Learning.\\\" Advances in neural information processing systems. 2005.\\n\\n[3] Rothfuss, Jonas, Vincent Fortuin, and Andreas Krause. \\\"PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees.\\\" arXiv preprint arXiv:2002.05551 (2020).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Simple idea, results only in toy settings\", \"review\": \"The paper introduced a meta-learning framework in which a kernel describing similarity between the tasks is used to construct an RKHS which is used to perform kernel regression. The framework is instantiated in a form of an algorithm: TANML which can be viewed as an extension to a popular Meta-SGD algorithm. The experiments on two regression tasks are presented to analyse the efficacy of the proposed method.\\n\\n1. I consider the method mathematically sound, ie. I don't see theoretical reasons which would make it obvious that it wouldn't work.\\n2. The combination of using task-similarity and kernels is not present in the literature known to me, so the work under review contains (some elements of) novelty.\\n3. However, both \\\"explicitly employing task-similarity\\\" (Achille et al. 2019) and \\\"using kernel methods\\\" (Vinyals et al. 2016) (separately) is well represented in past works.\\n4. In my opinion moving kernels from space of images/classes (like in Vinyals and other kernel methods cited in the paper) to the space of tasks doesn't, on its own, demonstrate the level of novelty that is required by accepted papers. Comparison to and commentary on the previous metric-based meta-learning papers present in the work under review is unsatisfactory.\\n5. The performed experiments are extremely toyish: the results are not sufficient to support the claims of the paper. This is further exacerbated by the fact that authors need to settle with optimization tricks to learn their TANML model.\\n\\nI recommend against publication at ICLR. The novelty of the authors' work is limited and the experiments are not convincing. 
I appreciate that the goal of the work is not to beat SOTA, and rather to introduce and investigate a small change to MAML, but I believe that reducing the scope of the research in this way grants an expectation of extensive experimental evidence which this paper is lacking.\", \"suggestion\": \"To expand the research to higher-dimensional, few-shot classification tasks, one could hardcode the kernel based on the human understanding of the classes. This way, one could take a problem which Meta-SGD is known to be working with and see if an introduction of a kernel gives a measurable improvement.\", \"technical\": \"1. \\\"Training for tasks individually will result in a predictor that overfits to $\\\\mathcal{X}$, $\\\\mathcal{Y}$, and generalizes poorly\\\": It's unclear what \\\"individually\\\" means here (is MAML with batch size = 1 training for tasks individually?). It's also far from obvious, in particular in the context of Raghu et al. (Rapid Learning or Feature Reuse?). I encourage authors to avoid using such statements without referring to argumentation behind them.\\n2. I am also confused by the sentence \\\"while there have been studies on deriving features and metrics for understanding the notion of similarity between data sources or datasets, they have not been used in the actual design of meta-algorithms\\\": isn't the (cited by authors) Achille et al. (2019) one example of such work?\\n3. Along the whole paper, \\\\citep is used, even when \\\\citet is appropriate. See when to use each one [here](https://www.reddit.com/r/LaTeX/comments/5g9kn1/whats_the_difference_between_cite_citep_and_citep/).\\n4. TA*N*ML in Sec. 4.\\n5. .. realizations of tasks is reported **in** Table **2**. In Sec. 4.2.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good work in progress but further work is needed, both theoretical and experimental\", \"review\": \"---- Update ----\\n\\nI thank the authors for clarifications. I trust that the suggestions of all reviewers, taken together, provide substantial avenues for improving the work. However, at this point I must keep my score and encourage the authors to continue the work with the valuable honest feedback provided here.\\n\\n---- Original Review ----\", \"summary\": \"The paper aims to formalize \\u201ctask similarity\\u201d in meta-learning settings by making use of nonparametric kernel regression techniques; such similarity information is then proposed as a means to alleviate some of the current issues with meta-learning algorithms such as MAML/Meta-SGD, namely reliance on large sets of similar meta-training tasks. Experiments focus on standard toy regression tasks with the added meta-training data scarcity.\", \"strong_points\": [\"Principled approach to a challenging and current problem of interest for the sub-field of meta-learning and beyond. However, formalization of meta-learning approaches in terms of the NTK is recent but not novel [see reference Wang 2020 in paper].\", \"Good work in progress, but the attempt to publish is premature.\"], \"weak_points\": [\"Very poor representation of relevant and conceptually similar recent work, see [1] for a comprehensive review, and specifically [2, 3, 4, 5, 6] for similar approaches.\", \"Inaccurate claims of novelty are made; they must be made more specific and put into context. For example, the claim that task descriptors have not been used in the design of meta-learning algorithms is false, see [5, 6]. That said, the current approach could be used to analyze such SOTA approaches and perhaps explain their performance.\", \"Meta-training data reuse across tasks at test time has been proposed previously, e.g. [7], so it is also not novel to this paper.\", \"Very weak experimental evidence. 
Please use some of the few-shot image classification datasets, or standard RL tasks available since MAML was published.\", \"Proposed method needs extensive approximations to scale up to more interesting problems.\"], \"recommendation_and_rationale\": \"I believe the paper should be rejected in current form, but I strongly encourage the authors to add more experimental data and submit to a workshop.\", \"references\": \"[1] Meta-Learning in Neural Networks: A Survey\\nTimothy Hospedales, Antreas Antoniou, Paul Micaelli, Amos Storkey. https://arxiv.org/pdf/2004.05439.pdf\\n[2] Recasting Gradient-Based Meta-Learning as Hierarchical Bayes\\nErin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, Thomas Griffiths. https://arxiv.org/abs/1801.08930\\n[3] Bayesian Model-Agnostic Meta-Learning. Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, Sungjin Ahn. https://papers.nips.cc/paper/7963-bayesian-model-agnostic-meta-learning.pdf \\n[4] Probabilistic Model-Agnostic Meta-Learning. Chelsea Finn, Kelvin Xu, Sergey Levine. http://papers.nips.cc/paper/8161-probabilistic-model-agnostic-meta-learning\\n[5] Meta-Learning with Latent Embedding Optimization. Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, Raia Hadsell. https://arxiv.org/abs/1807.05960\\n[6] Few-Shot Image Recognition by Predicting Parameters from Activations. Siyuan Qiao, Chenxi Liu, Wei Shen, Alan Yuille. https://arxiv.org/abs/1706.03466\\n[7] Meta-Q-Learning. Rasool Fakoor, Pratik Chaudhari, Stefano Soatto, Alexander J. Smola. https://arxiv.org/abs/1910.00125\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
HCa8gC_COVk | Mutual Calibration between Explicit and Implicit Deep Generative Models | [
"Qitian Wu",
"Rui Gao",
"Hongyuan Zha"
] | Deep generative models are generally categorized into explicit models and implicit models. The former defines an explicit density form that allows likelihood inference; while the latter targets a flexible transformation from random noise to generated samples. To take full advantages of both models, we propose Stein Bridging, a novel joint training framework that connects an explicit (unnormalized) density estimator and an implicit sample generator via Stein discrepancy. We show that the Stein bridge 1) induces novel mutual regularization via kernel Sobolev norm penalization and Moreau-Yosida regularization, and 2) stabilizes the training dynamics. Empirically, we demonstrate that Stein Bridging can facilitate the density estimator to accurately identify data modes and guide the sample generator to output more high-quality samples especially when the training samples are contaminated or limited. | [
"deep generative models",
"generative adversarial networks",
"density estimation"
] | Reject | https://openreview.net/pdf?id=HCa8gC_COVk | https://openreview.net/forum?id=HCa8gC_COVk | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"3KB34Iv4UG4",
"T7TtwMs_LUy",
"orqU_YRUHTm",
"srAqictryu",
"c_W11bDa70T",
"S9q1AcYRWt"
],
"note_type": [
"decision",
"comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513390,
1605351762772,
1603980524647,
1603976929795,
1603903869414,
1603773782175
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"~Jianwen_Xie1"
],
[
"ICLR.cc/2021/Conference/Paper3523/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3523/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3523/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3523/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper presents \\\"stein bridge\\\", a joint training framework that connects an explicit (unnormalized) density estimator and an implicit sample generator via Stein discrepancy. The idea and methodology are valid and of interest. But the raised concerns were not properly addressed.\"}",
"{\"title\": \"missing related works about EBMs\", \"comment\": \"Dear Authors and Reviewers,\\n\\nWe found that the current paper missed some important references about pioneering works that are related to energy-based generative models parameterized with deep net energy.\\n\\nThe first paper that proposes to train an energy-based model parameterized by modern deep neural network and learned it by Langevin based MLE is in (Xie. ICML 2016) [1]. The model is called generative ConvNet, because it can be derived from the discriminative ConvNet. This is also the first paper to formulate modern ConvNet-parametrized EBM as exponential tilting of a reference distribution, and connect it to discriminative ConvNet classifier. That is, EBM is a generative version of a discriminator. (Xie. ICML 2016) [1] originally studied such an EBM model on image generation theoretically and practically in 2016.\\n\\n(Xie. CVPR 2017) [2] (Xie. PAMI 2019) [3] proposed to use Spatial-Temporal ConvNet as the energy function in EBMs for video generation. The model is called Spatial-Temporal generative ConvNet.\\n\\n(Xie. CVPR 2018) [4] also proposed to use volumetric 3D ConvNet as the energy function for 3D shape pattern generation. It is called 3D descriptor Net.\\n\\nAlso, the Generative Cooperative Nets (CoopNets) (Xie. PAMI 2018)[5] and (Xie. AAAI 2018) [6], which jointly trains an EBM and a generator network by MCMC teaching.\\n\\nThose are the more original and earlier papers for deep EBMs with ConvNet as energy function than what you have cited, e.g., [7](Yilun Du and Igor Mordatch, 2019).\", \"references\": \"[1] A Theory of Generative ConvNet. 
Jianwen Xie *, Yang Lu *, Song-Chun Zhu, Ying Nian Wu (ICML 2016)\\n\\n[2] Synthesizing Dynamic Pattern by Spatial-Temporal Generative ConvNet Jianwen Xie, Song-Chun Zhu, Ying Nian Wu (CVPR 2017)\\n\\n[3] Learning Energy-based Spatial-Temporal Generative ConvNet for Dynamic Patterns Jianwen Xie, Song-Chun Zhu, Ying Nian Wu IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2019\\n\\n[4] Learning Descriptor Networks for 3D Shape Synthesis and Analysis Jianwen Xie *, Zilong Zheng *, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, Ying Nian Wu (CVPR) 2018\\n\\n[5] Cooperative Training of Descriptor and Generator Networks. Jianwen Xie, Yang Lu, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2018\\n\\n[6] Cooperative Learning of Energy-Based Model and Latent Variable Model via MCMC Teaching. Jianwen Xie, Yang Lu, Ruiqi Gao, Ying Nian Wu. AAAI 2018.\\n\\n[7] Yilun Du and Igor Mordatch. Implicit generation and modeling with energy based models. In Advances in Neural Information Processing Systems, pages 3603\\u20133613, 2019\\n\\nThank you!\"}",
"{\"title\": \"Stein bridge is proposed to facilitate training both implicit and explicit models\", \"review\": \"In this paper, the task is to train an implicit and an explicit model simultaneously in a GAN setting with a new regularizer called \\\"Stein bridge\\\", which is constructed from the kernel Stein discrepancy between the implicit and explicit models. The idea of adding such regularization, with the notion of mutual regularization of two models, is interesting. The proposed regularization term is clearly presented, the stabilization of the training procedure is illustrated, and the empirical results are clearly shown and discussed. The sample quality of the generative models is compared.\\n\\nThere are some parts that remain unclear or could be further emphasized.\\nIt is said that training both explicit and implicit densities is more helpful to the whole procedure. Despite the cited literature reviews, it is unclear to me, from this paper's presentation, why this is so.\\nIn the paper, the implicit network is parameterized by \\\\theta as G_{\\\\theta} while the explicit EBM is parameterized by \\\\phi, as p_{\\\\phi}.\", \"before_the_stein_bridge_is_introduced\": \"How do \\\\theta and \\\\phi interact? From Figure 1 it seems they do not interact during training but are only coupled via the objective. In addition, which of the models (implicit or explicit) is used as the final outcome?\", \"after_the_stein_bridge_is_introduced\": \"The Stein bridge tries to minimize the Stein discrepancy between the implicit and explicit models. How is the EBM chosen so that the density class is rich enough? How are \\\\lambda_1 and \\\\lambda_2 chosen to balance the three terms?\\n\\nHow does the training of the generative model compare to the learning procedure in \\\"Deep energy estimator networks.\\\" Saremi, Saeed, et al. 
2018, which learns a generative model from a score-matching-based criterion?\\n\\nThanks for the presentation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper gives a solid and comprehensive analysis and implementation of the idea of Stein bridge, but the motivation may need further discussion.\", \"review\": [\"Pros:\", \"The idea to bind an explicit and implicit generative model and study the effect on both models is a valid research topic.\", \"The method seems novel and inspiring, and the paper also shows a theoretical understanding of the proposed method, which is technically nontrivial.\", \"The presentation is clear and pleasing (e.g., content organization, background, Fig. 1). The paper also includes a detailed review of existing works.\", \"The paper presents comprehensive experiments, and the results are promising.\"], \"cons\": [\"On the motivation.\", \"Would it be too costly to train two models for one task just to alleviate the problem of one model? It seems to hide the problems of each model and serves as a black-box solution. The theoretical analysis makes things better, but the explanations may seem to be like \\\"side effects\\\" but not a direct solution targeting the problems. Moreover, both the explicit and implicit models have the same amount of knowledge from data: one model cannot provide more information to the other model beyond the training dataset. How to understand the improvement under this perspective?\", \"For training an explicit model, the mode-collapse behavior may be due to the usage of the Stein discrepancy. Training by maximizing likelihood (i.e., minimizing forward KL divergence) via classical methods e.g. contrastive divergence may already circumvent this problem.\", \"On the theory. It may be better to explain why \\\"By smoothing the Stein critic, the Stein bridge encourages the energy model to seek more modes in data instead of focusing on some dominated modes\\\". 
How does it make the Stein discrepancy more sensitive to missing a mode?\", \"On the experiments.\", \"I see in the supplement that a hyperparameter search is conducted, but I did not find the metric used to select them. Is it done by AUC / IS / FID / MMD / HSR / manual visualization evaluation? Results of \\\"WGAN + something\\\" may be sensitive to hyperparameters and maybe they should not be worse than the vanilla WGAN (taking zero regularization).\", \"In Figs. 4 and 5, how are the density estimates of the implicit models GAN/WGAN and the samples of the explicit models DEM/DGM visualized? Do they rely on techniques like kernel density estimation or MCMC? If yes, how to make sure these techniques do not affect the outcome?\", \"The definition of High-quality Sample Rate (HSR) may value mode collapse, since it gives a high score if all generated samples are at the center of one mode. For the result in Fig. 8(a), maybe the HSR needs to drop to be faithful to the data.\", \"The experiment is comprehensive and shows the desired improvements. But maybe a comparison with other explicit model training methods (contrastive divergence, annealed Langevin dynamics, etc.) that avoid mode-collapse is also needed.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very similar to a NeurIPS-19 paper\", \"review\": \"This paper adopted Stein's method to connect an explicit density estimator and an implicit sample generator to propose an objective function for deep generative learning.\\nThis paper is written well and the organization is very clear.\\nHowever, this paper is very similar to a NeurIPS-19 paper (Two Generator Game: Learning to Sample via Linear Goodness-of-Fit Test).\\nBesides, the authors didn't cite this paper in the current submission.\\n\\nFirst, the top-level idea is the same.\\n1. Both adopted two generative models. One is explicit and the other is implicit.\\n2. Both adopted Stein's method to connect these two generative models.\\n\\nSecond, important technical details are similar.\\n1. Both used an energy-based model in the explicit part.\\nThe energy-based model is used to mimic the underlying distribution of the real data.\\n2. Both used Stein's method to avoid computing the normalization constant.\\nStein's method is a likelihood-free method that depends on the distribution only through logarithmic derivatives.\\nWhen taking derivatives, the normalization constant is eliminated.\\n\\nThird, these two papers have the same target.\\nThey hope the explicit generative model characterizes the form of the distribution and\\nthe implicit one produces vivid or genuine-looking images.\\n\\nThe novel part of this submission is the introduction of the kernel Sobolev dual norm and Moreau-Yosida regularization.\\n\\nThere is a big gap between the optimization formulations in Equation (3), Theorem 1, and Theorem 2 and the experimental results shown in Section 5.\\nBesides, no open-source code is provided.\\nIt is very hard for me to figure out the details of the experiments and at the same time to check the reproducibility of this paper.\\n\\nIn summary, I hope that the authors will correctly cite the closely related NeurIPS-19 paper,\\nand clearly demonstrate their completely new 
contributions as compared to the NeurIPS-19 paper.\\n\\nSince ICLR is a highly selective conference,\\nthe originality and significance of a submission will always be the first priority.\\nAlthough the writing quality of this paper is good, I cannot accept this paper in its current state.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting paper with some gaps in theoretical analysis\", \"review\": \"Summary of the paper:\\n\\nThe author proposed a novel regularization technique for jointly training of implicit model (IM) and explicit model (EM). This is achieved by connecting the training of IM and EM via Stein discrepancy (SD), which is called Stein bridging. \\n\\nThe author claimed this regularization can (i) smooth the Wasserstein critic by kernel Sobolev dual norm (ii) smooth the Stein critic by Moreau-Yoshida regularization (iii) stabilize the training of IM. \\n\\nThe author also theoretically proves the (i), (ii) under Wasserstein distance, and (iii) for a simple toy case. The author empirically evaluated the EM by inspecting the mode coverage of toy example, ranking digit and OOD detection on more complex data sets. \\n\\n-----\", \"reviews\": \"\", \"clarity\": \"The main text of this paper is in general clearly written and easy to follow. However, I have some concerns related to the theoretical analysis in the Appendix.\", \"novelty\": \"This regularization technique seems to be novel to the best of my knowledge. Although the idea is simple, the author provided some theoretical analysis to back it up. However, it is not enough to fully back the claims made in the main text. Details later.\", \"technical_soundness_and_concerns\": \"I have some concerns related to the SD and proof for theorem 1 and 2.\\n1. For SD, the author mentioned that for Stein critic $f_s(\\\\pmb{x})$, it is not necessary $\\\\mathbb{R}^d\\\\rightarrow \\\\mathbb{R}^d$. One can specify a lower dimension $d'<d$ and make $f_s:\\\\mathbb{R}^d\\\\rightarrow \\\\mathbb{R}^{d'}$ as long as $f_s$ belongs to the Stein class. Indeed, this is true for Stein's identity (see Def 2.1 in Liu's paper). However, this does not mean it defines a valid discrepancy measure. 
The original SD in Gorham's paper assumes $d'=d$ and the trace operator is used to transform $d\\\\times d$ Stein identity to the scalar value. In that case, Gorham proves its validity by investigating its weak convergence property. My concern is I cannot see the direct generalization from the trace operator to other matrix operators like the ones used in this paper with $d'=1$. In other words, I agree that for two distributions $p$,$q$, when $p=q$, the SD defined in this paper is $0$, but not vice versa. Could the author point out any references or provides any details on the validity of the proposed SD?\\n2. I do not fully follow the derivations in Appendix C.1. In page 15, how do you introduce the auxiliary variable $r^2$? Why there are two $\\\\min$ operations instead of one $\\\\min$ with jointly optimizing $h$ and $r$? Could you elaborate more on this and also how do you get rid of $r^2$ in the constraints in the second equality?\\n3. I am also a bit confused about derivations in Appendix C.2. How do you combine the $\\\\mathbb{P}$ and $\\\\gamma$ in one $\\\\min$ operation instead of $\\\\min_{\\\\mathbb{P}}\\\\min_{\\\\gamma}$? (In page 15)\\n4. In the main text, the author claimed that other objectives can be used for training implicit the model such as JS divergence. However, the theoretical analysis (theorem 1 and 2) only shows the regularization effect of Stein bridging is only for Wasserstein-1. So the analysis won't hold for other objectives like JS divergence. \\n5. It is known that Stein based divergence is a weak objective for training EBM (see Liu 2016). Therefore the regularization technique may help a lot, like the mode coverage demonstrated in the experiment. I wonder if SOTA training method for EBM is used (like SSM in Song 2020, Song 2019), does this regularization help the training, because this regularization is not cheap to compute (higher than the SOTA method for EBM).\\n6. In figure 4, I cannot find DEM and EGAN in the density plot.\\n7. 
In table 2, it seems that the training of DEM is failing as the AUC is close to 0.5. Any guess on why it fails? How do you pre-process the data set? Do you add different scales of noise in the images to smooth it for the EBM to learn the distribution (like the trick used in Song 2019)?\", \"summary\": \"I am quite interested in this approach. But I am a bit concerned about the theoretical analysis and the true advantage of training EBM with Stein bridging compared to the cheaper SOTA EBM method.\\n--------\\nLiu, Qiang, and Yihao Feng. \\\"Two methods for wild variational inference.\\\" arXiv preprint arXiv:1612.00081 (2016).\\n\\nSong, Yang, et al. \\\"Sliced score matching: A scalable approach to density and score estimation.\\\" Uncertainty in Artificial Intelligence. PMLR, 2020.\\n\\nSong, Yang, and Stefano Ermon. \\\"Generative modeling by estimating gradients of the data distribution.\\\" Advances in Neural Information Processing Systems. 2019.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
ULQdiUTHe3y | Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks | [
"Jan Schuchardt",
"Aleksandar Bojchevski",
"Johannes Gasteiger",
"Stephan Günnemann"
] | In tasks like node classification, image segmentation, and named-entity recognition we have a classifier that simultaneously outputs multiple predictions (a vector of labels) based on a single input, i.e. a single graph, image, or document respectively. Existing adversarial robustness certificates consider each prediction independently and are thus overly pessimistic for such tasks. They implicitly assume that an adversary can use different perturbed inputs to attack different predictions, ignoring the fact that we have a single shared input. We propose the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation, i.e. cannot be attacked. We focus on Graph Neural Networks and leverage their locality property - perturbations only affect the predictions in a close neighborhood - to fuse multiple single-node certificates into a drastically stronger collective certificate. For example, on the Citeseer dataset our collective certificate for node classification increases the average number of certifiable feature perturbations from $7$ to $351$.
| [
"Robustness certificates",
"Adversarial robustness",
"Graph neural networks"
] | Accept (Poster) | https://openreview.net/pdf?id=ULQdiUTHe3y | https://openreview.net/forum?id=ULQdiUTHe3y | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Y0QD2cUcrR_",
"wHQEEyOf0e7",
"sr4wM_mrkqR",
"CYLsAx5t9wd",
"I36iuweguqe",
"Lft3TtcKDb7",
"Z8FbJxzBmOj",
"c5YLnE9omq",
"lqtzTmNj6Ms",
"OQdINjSLlrE"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040482876,
1605710716917,
1605710234261,
1605709952732,
1605709906838,
1605709845156,
1603937220322,
1603883070560,
1603803672445,
1603802980350
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3522/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3522/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3522/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3522/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3522/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3522/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3522/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3522/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3522/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper considers a new setting of robustness, where multiple predictions are simultaneously made based on a single input. Different from existing robustness certificates which independently consider perturbation of each prediction, the authors propose a collective robustness certificate that computes the number of predictions which are simultaneously guaranteed to remain stable under perturbation. This yields more optimistic results. Most reviewers think this is a very interesting work and the authors present an effective method to combine individual certificates. The experimental results are convincing. I recommend accept.\"}",
"{\"title\": \"Summary / Changelog\", \"comment\": \"This post serves as a summary of updates since the initial submission of our manuscript.\", \"we_have_replied_to_all_reviewers_and_made_the_following_changes_in_response_to_their_comments\": [\"Reviewer 1:\", \"Add overview over different heuristic defenses for collective tasks\", \"Add experiments on WebKB and Reuters dataset (Appendix A)\", \"Reviewer 3:\", \"Correct typos in Eq.2 and Eq.7\"], \"we_have_further_made_the_following_minor_changes\": [\"correct caption of Fig. 8 (Cora, Citeseer, Pubmed -> WebKB, Reuters)\", \"remove unnecessary \\\"through\\\" at end of second paragraph\", \"correct indentation in Algorithm 2\", \"correct arXiv references (bibliography style does not support eprint field)\", \"increase x-lim and y-lim on Fig. 3\", \"correct indexing in Eq. 75 and Eq. 76 (h_n instead of h)\"]}",
"{\"title\": \"Response - Reviewer 1\", \"comment\": \"Thank you for your review!\\n\\nBefore responding to your questions, let us briefly summarize the mentioned paper ([1]) for other readers. The paper deals with the adversarial robustness of pairwise associative Markov networks (AMN), a type of probabilistic graphical model for node classification. The authors propose a robust loss function that maximizes the margin between the likelihood of the ground-truth labels and that of all other possible labels under adversarial perturbation.\\n\\n### Responses\\n**Is there a reason why [1] was not discussed in the paper?** \\nThe robust loss function in [1] is not a robustness certificate. While it compares favorably to standard training under different adversarial attacks, it does not provide any provable guarantees. In our original submission we simply refer to a survey paper of non-provable defenses that have subsequently been broken by novel adversarial attacks. Based on your review we have updated our manuscript to include an overview of different adversarial defenses (including [1]) , which will hopefully allow for a better differentiation between robustness certificates and defenses that do not provide provable guarantees.\\n\\n**How does the proposed method apply to robust Associative Markov Networks (AMN) in [1]?** \\nThe proposed method can in principle be applied to AMNs. All that is needed is some base certification procedure that can guarantee the robustness of individual predictions to adversarial attacks. The many base certificates can then be combined into a collective certificate using our method.\\n\\n**Did you try running the proposed method on WebKB and Reuters which are present in this work [1]?** \\nBased on your comment, we have run additional experiments on graphs constructed from the WebKB and Reuters corpora (see Fig. 8). 
Since we could not find an official reference implementation, the constructed graphs might be slightly different from those in [1] due to random sampling, tie-breaking among nodes with equal cosine similarity, etc.\\nFurther note that the multiple classes are not merged into two super-classes. Repeating the experiment with binary class labels as done in [1] yielded results even slightly better than the multi-class results shown on Fig. 8.\\n\\n**How does the linear relaxation in this work differ from the one provided in [1]?** \\nThis work and [1] propose two different mixed-integer programs, one minimizing the number of robust predictions, the other one minimizing a likelihood margin. In both cases the integer variables are relaxed to real-valued variables. Furthermore both mixed-integer programs involve boolean logic that is expressed using linear constraints. Quoting section 3.2 of [1], these are \\u201cstandard techniques\\u201d and not the core contribution of either paper.\\n\\n**How would the proposed method compare against robust AMN [1]?** \\nAs discussed in the response to the first question, the proposed method is a robustness certificate that provides provable guarantees while the method from [1] is not.\\nWhile robustness certificates can be evaluated based on their certified ratio, evaluating other defenses requires powerful adversarial attacks that are specifically adapted to break them.\\n\\n**Can the findings be used to come up with robust methods outside classification, like segmentation and scene understanding?** \\nThe proposed method can in principle be applied to any task in which many labels are predicted for a single shared input, including segmentation and scene understanding. The performance gain over the naive certificate will depend on the degree of locality of the classifier architecture.\\n\\n\\n\\n[1] Kai Zhou, Yevgeniy Vorobeychik. Robust Collective Classification against Structural Attacks. UAI 2020.\"}",
"{\"title\": \"Response - Reviewer 2\", \"comment\": \"Thank you for your review!\\n\\nWe are pleased to hear that you did not find any notable weaknesses and that you are convinced by the overall quality of the paper\\u2019s writing and content.\"}",
"{\"title\": \"Response - Reviewer 3\", \"comment\": \"Thank you for your review!\\n\\nWe are glad to know that you found the paper well-written, the work well-motivated and the results convincing. \\nWe have corrected the two typos you pointed out in the updated version of the manuscript.\"}",
"{\"title\": \"Response - Reviewer 4\", \"comment\": \"Thank you for your review!\\n\\n**Concerning 1.):** \\nAs you correctly pointed out, the locality assumption is made transparent to the reader -- the manuscript includes an entire section dedicated to discussing the limitations. Nonetheless, popular GNN architectures satisfy locality in practice, which results in a significant increase in the certified ratio using our method, as shown in our experiments. This limitation can be alleviated in future work but that is out of scope for this paper.\\n\\n**Concerning 2.):** \\nWe believe that the paper is sufficiently self-contained and provides all preliminaries needed to understand the research problem. However, if you have any specific questions we would be glad to answer them and adapt the manuscript to resolve any unclarities.\"}",
"{\"title\": \"This paper proposes a new concept called \\u201ccollective robustness certificate\\u201d that computes the number of predictions which are simultaneously guaranteed to remain stable under perturbation.\", \"review\": \"This paper studies classifiers that collectively output many predictions based on a single input. Existing adversarial robustness certificates assume that an adversary can use different perturbed inputs to attack different predictions, and ignore the fact of a single shared input, thereby being overly pessimistic. This paper proposes a collective certificate that computes the number of simultaneously certifiable nodes for which the predictions can be guaranteed to be stable (not change). It is conducted basically by fusing individual certificates into a provably stronger certificate through explicitly modeling locality.\", \"pros\": \"This is the first effort that considers collective robustness certificate.\", \"cons\": \"1.\\tAs discussed in the paper, the proposed approach is designed to exploit locality. Without locality, it is equivalent to a na\\u00efve combination of base certificates that sums over perturbations in the entire graph. \\n2.\\tThe writing of the paper can be improved. The abstract seems to be unfinished. It appears to be hard to include sufficient preliminaries to clearly describe the research problem in a conference paper. It\\u2019s probably better to have a longer version as a journal paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Interesting paper. Well-motivated. Good results\", \"review\": \"** Summary:\\nIn the context of structured prediction, where multiple predictions are simultaneously made based on a single input, this works argue that existing robustness certificates independently operating on each node prediction end up with overly pessimistic results. Rather than that, this work advocates to collectively certify the overall accuracy using a single perturbed graph at a time. Starting from the building-blocks of base certificates, the authors formulate a global optimization problem, which is made tractable via a number of relaxation steps resulting in a final mixed-integer linear programming (MILP). Experimental results demonstrate clear advantage of the proposed certificates over base ones, with reasonable computational overheads coming from solving the MILP.\\n\\nThe paper is well-motivated, well-written and easy to follow. I think this is a valid method to assess robustness of classifier satisfying locality like GCN.\\n\\n** Strength:\\n - This work is well-motivated. The arguments are valid on the limitations of independent based certificates for collective tasks. Experimental results convincingly show how such certificates are pessimistic in the addressed context.\\n - The writing is clear and well-structured, easy to understand and follow. \\n - Nice discussion on the limitations\\n \\n** Limitations:\\n - Typos:\\n \\t+ Eqn. (2): $f_n(\\\\boldsymbol {X}^{'}, \\\\boldsymbol{A}^{'}) = f_n(\\\\boldsymbol{X}^{''}, \\\\boldsymbol{A}^{ \\\\textcolor{red}{''}})$\\n\\n \\t+ Eqn. (7): $\\\\boldsymbol {X}^{''}_{i,d} = \\\\psi_i^{(n)}\\\\boldsymbol {\\\\textcolor{red}{X}}^{'}_{i,d} + (1-\\\\psi_i^{(n)})\\\\boldsymbol {\\\\textcolor{red}{X}}_{i,d}$\\n \\n \\n\\n** Justification of rating: overall this is an interesting paper. 
The motivation, arguments and results are convincing.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This paper first proposes a collective robustness certificate by fusing individual certificates into a provably stronger one, which significantly outperforms existing adversial certificates. Thus, I vote for accepantance.\", \"review\": \"This paper addresses the limitation of the existing adversarial robustness certificates that ignores that a single shared input is present, and thus assumes an adversary can use different perturbed inputs to attack different predictions. A novel collective certificate fusing single certificates into a stronger one, is proposed by explicitly modeling local structure of input data using graph convolution node classifiers. In terms of certified ratio, the collective certificate significantly improve the results compared with existing individual certificates.\\n\\n-quality: the technical quality is sound.\\n\\n-clarity: the input data, problem formulation and method are clearly described.\\n\\n-originality & significance: it is the first attempt in considering collective robust certificates (CRCs) by fusing individual adversarial certificate. As shown in the experiments, the certified ratio of the CRC is significantly improved over existing adversarial one. I think the collective robust certificate has some impacts for robust graph node classifications.\", \"pros\": \"The paper is well motivated. The problem and the method are both clearly presented. The improvements of the collective robust certificate over the existing ones is sufficiently high in terms of certified ratios.\", \"cons\": \"I do not find any notable weaknesses.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"great work but some comparisons could be missing.\", \"review\": \"Summary\\n-------------\\nCurrent methods on adversarial robustness certificates consider data points independently which are highly pessimistic for structured data. This work proposes the first collective robustness certificate that considers the structure of the graph by modeling locality in order to derive stronger guarantees that the predictions remain stable under perturbations.\\n\\nThis work focuses on Graph Neural Networks comparing between a Naive collective certificate (baseline) and a proposed collective certificate that combines single-node certificates effectively. \\n\\nThe experiments compares these two methods against certified ratio vs. attribute and edge perturbations on the datasets Cora-ML, Citeseer, and PubMed.\\n\\nPros\\n------\\n- The paper is well-written and easy to follow.\\n- The paper tries to address a very common problem of adversarial attacks where data points are structured. Although it is a common problem, it was not explored with respect to collective robustness certificates before this work.\\n- The paper shows a novel, effective way of combining individual certificates by incorporating locality.\\n- The paper presents an LP-relaxation method that allows us to solve the certificate fast for large graphs where mixed-integer problems are prohibitively costly.\\n- The paper shows strong theory and experiments to illustrate the efficacy of the proposed collective certificate.\\n- The experiments show run-time and uncertainty measures with multiple runs for statistical significance.\\n\\nQuestions\\n--------------\\n- Is there a reason why [1] was not discussed in the paper? it is highly relevant as it also studies adversarial robustness for structural attacks. 
\\n- How does the proposed method apply to robust Associative Markov Networks (AMN) in [1]?\\n- Did you try running the proposed method on WebKB and Reuters which are present in this work [1]?\\n- How does the linear relaxation in this work differ from the one provided in [1]?\\n- How would the proposed method compare against robust AMN [1]?\\n- Can the findings be used to come up with robust methods outside classification, like segmentation and scene understanding?\\n\\nIn summary, I like the novelty of this method and the thorough experiments that were conducted, which illustrate the efficacy of the proposed collective certificate, thus I recommend an accept.\\n\\n[1] Kai Zhou, Yevgeniy Vorobeychik. Robust Collective Classification against Structural Attacks. UAI 2020. \\n\\n------- Post rebuttal\\nI am satisfied with most of the rebuttal the authors have provided, and I have raised my score to an 8.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
7IElVSrNm54 | Zero-shot Fairness with Invisible Demographics | [
"Thomas Kehrenberg",
"Viktoriia Sharmanska",
"Myles Scott Bartlett",
"Novi Quadrianto"
] | In a statistical notion of algorithmic fairness, we partition individuals into groups based on some key demographic factors such as race and gender, and require that some statistics of a classifier be approximately equalized across those groups. Current approaches require complete annotations for demographic factors, or focus on an abstract worst-off group rather than demographic groups. In this paper, we consider the setting where the demographic factors are only partially available. For example, we have training examples for white-skinned and dark-skinned males, and white-skinned females, but we have zero examples for dark-skinned females. We could also have zero examples for females regardless of their skin colors. Without additional knowledge, it is impossible to directly control the discrepancy of the classifier's statistics for those invisible groups. We develop a disentanglement algorithm that splits a representation of data into a component that captures the demographic factors and another component that is invariant to them based on a context dataset. The context dataset is much like the deployment dataset, it is unlabeled but it contains individuals from all demographics including the invisible. We cluster the context set, equalize the cluster size to form a "perfect batch", and use it as a supervision signal for the disentanglement. We propose a new discriminator loss based on a learnable attention mechanism to distinguish a perfect batch from a non-perfect one. We evaluate our approach on standard classification benchmarks and show that it is indeed possible to protect invisible demographics. | [
"fairness",
"missing data",
"adversary",
"classification",
"disentanglement"
] | Reject | https://openreview.net/pdf?id=7IElVSrNm54 | https://openreview.net/forum?id=7IElVSrNm54 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"BzbN8qReO-",
"mS_4BZlGl5y",
"ulj8lebEWJF",
"0oE3uEFYWGV",
"cXXGTBbh_U0",
"g_80V50W0qA",
"A78XEV5J7WM",
"2mc2Ye6j8Al",
"5zdHJslW8GH",
"cHnzSo244NH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040353197,
1606305281133,
1606305225995,
1606305019568,
1606304441434,
1606304228873,
1603888941141,
1603836001525,
1603824572914,
1603776010185
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3518/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3518/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3518/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3518/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3518/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3518/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3518/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3518/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3518/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper studies the problem of satisfying group-based fairness constraints in the situation where some demographics are not available in the training dataset. The paper proposes to disentangle the predictions from the demographic groups using adversarial distribution-matching on a \\\"perfect batch\\\" generated by a clustered context set.\", \"pros\": [\"The problem of satisfying statistical notions of fairness under \\\"invisible demographics\\\" is a new and well-motivated problem.\", \"Creative use of recent works such as DeepSets and GANs applied to the fairness problem.\"], \"cons\": [\"Makes a strong assumption that the clustering of the context set will result in a partitioning that has information about the demographics. This requires at the very least a well-behaved embedding of the data w.r.t. the demographic groups, and a well-tuned clustering algorithm (where optimal tuning is difficult in practice on unsupervised problems) -- but at any rate, as presented, the requirements for a \\\"perfect batch\\\" is neither clear nor formalized.\", \"Lack of theoretical guarantees.\", \"Various concerns in the experimental results (i.e. proposed method does not clearly outperform other baselines, high variance in experimental results, and other clarifications).\", \"Overall, the reviewers agreed the studied problem is new, interesting and relevant to algorithmic fairness; however, there were numerous concerns (see above) which were key reasons for rejection.\"]}",
"{\"title\": \"Summary of Changes\", \"comment\": \"We have updated our manuscript with five principal changes:\\n\\n1. We have added results for a 3-digit-3-color variant of Colored MNIST, under the partial-outcome setting, to the main text (Table 2), noting that our method (ZSF) outperforms the baselines by a significant margin with respect to both accuracy and all fairness metrics. We visualize the invariant representations in the appendix. Since, in this case, S and Y are both no longer binary, we generalize the fairness metrics applied to the binary S/Y datasets in two ways: \\n 1. We compute the mean of the pairwise AR/TPR/TNR ratios. In the appendix, we additionally report the minimum (i.e. farthest away from 1) of the pairwise ratios (min. ratio) as well as the largest difference between the raw values (max. diff).\\n 2. We compute the Hirschfeld-Gebelein-Renyi (HGR) maximal correlation between S and $\\\\hat{Y}$, serving as a measure of dependence defined between two variables with arbitrary support.\\n2. The means and standard deviations for all results are now computed over 30 random seeds (note that it's not the standard error, but the standard deviation.)\\n3. Experiments for the Adult Income dataset have been redone using improved hyperparameters and corrected evaluation protocol, the error being that the weighted-sampling described in Section 2.1 had not been used for training of the classifier for either our method or the baselines. The results now also include a ZSF-with-ground-truth-balancing baseline that Reviewer 1 noted was previously missing.\\n4. We have updated Appendix C to include a table of the full set of hyperparameters used for the clustering and distribution-matching phases of the algorithm for both Colored MNIST and the Adult Income dataset, as well as an explanation of how both these hyperparameters and those of the baselines (including FWD) were determined.\\n5. 
A short discussion of the current limitations of the work, including some caveats about when it is appropriate to use our method, and about algorithmic fairness in general.\"}",
"{\"title\": \"Our response\", \"comment\": \"1. **The implementation detail in the experiments section is severely lacking, including description of hyperparameters/validation methods and implementation details for the comparison to Hashimoto et al.**\\n\\n We apologize for these omissions and have since incorporated them into the Appendix C. Table 5 contains a full specification of the hyperparameters used in the training of our ZSF model and details regarding the baselines are described textually. We will also provide the reviewers with a link to an anonymous GitHub repository containing our code and the scripts needed to reproduce the experiments in the paper.\\n\\n2. **Answer some of the notation questions.**\\n\\n Thank you for pointing the notational inconsistencies out in Section 2; we have amended the notation according to your feedback. To answer the questions raised about this:\\n\\n - $Sup(\\\\mathcal{Y}^tr)$ was intended to denote all values of the class label, y, present in the labelled training set.\\n - By \\\"we wish to use $Sup(\\\\mathcal{S}^{ctx} \\\\times \\\\mathcal{Y}^{ctx}) \\\\ Sup(\\\\mathcal{S}^{tr} \\\\times \\\\mathcal{Y}^{tr})$ as the training signal for the encoder\\\" we mean that since the discriminator can determine the origin of a batch of ($z_y$) embeddings (whether it came form a sample from the training or context set) by inferring its support over $S \\\\times Y$ (with both dataset containing all possible values of Y), to succeed in the minimax game, the encoder must learn to properly partition the s-related and s-unrelated information into $z_s$ and $z_y$ respectively. 
For instance, if the discriminator can determine that a batch contains purple 4s, when there are none in the training set, then it can safely conclude that the batch in question is from the context set (and vice-versa) and the encoder should take action to avoid this by removing color-information from $z_y$.\\n - By |S| we wished to denote the cardinality of the set of possible s-labels in the training set, and so the full statement should be interpreted as \\\"whenever we have more than a single demographic (defined by S) in our labelled dataset\\\". We have replaced this notation with dim() for the sake of clarity.\\n - The $z_i$ in $c_i = C(z_i)$ is distinct from the z mentioned in the disentanglement step - while for both this one and the preliminary clustering step, an autoencoder is used for learning an embedding, in the latter case there is no splitting of it into $z_y$ and $z_s$. The aforementioned equation simply means for each data-point, $x_i$, we encode it using an autoencoder (which is not shared between steps) before feeding it to the clusterer C to produce a cluster assignment $c_i$.\\n\\n3. **Questions and clarifications**\\n\\n 3.1 **Why is there no comparison to ZSF+bal. (ground truth) on the Adult dataset?**\\n\\n We are aware that our results for the Adult Income dataset were lacking in the initial version of the manuscript. These results have since been redone and have been incorporated into the updated version and now correctly include the ZSF-with-ground-truth balancing baseline.\\n\\n 3.2 **Can the authors clarify what the ZSF alone baseline is doing in the experiments section? It\\u2019s not written super clearly in the text. Does ZSF alone simply replace the perfect set in Figure 2 with the context set?**\\n\\n Yes, ZSF alone simply replaces the perfect set with the context set.\"}",
"{\"title\": \"Our response\", \"comment\": \"1. **Suspicious experimental results due to high variance of the fairness metrics**\\n\\n Prior work by Agrawal et al (2020) has pointed out that group-fairness metrics incur higher variance compared with accuracy due to stochasticity in the train-test splits and optimization process. In the case of Colored MNIST, the high variance can also be chalked up to the small size of the labelled dataset (60% of 10% of the total MNIST training data) following subsampling and its division into context and training sets. Our results on the Adult dataset do not show such a high variance. A different splitting procedure might ameliorate some of this.\\n\\n [1] Agrawal A, Pfisterer F, Bischl B, Chen J, Sood S, Shah S, Buet-Golfouse F, Mateen BA, Vollmer S. Debiasing classifiers: is reality at variance with expectation?. arXiv preprint arXiv:2011.02407. 2020 Nov 4.\\n\\n2. **How realistic is a part of the target label and sensitive attributes are missing (our learning with partial outcomes scenario)** \\n\\n When analyzing a train set of the Adult Income dataset, one of the most common datasets for fairness analysis, we found that it: \\n\\n Has 0 samples of native-country_Holand-Netherlands and Income >50K\\n Has 0 samples of native-country_Outlying-US(Guam-USVI-etc) and income >50K\\n Has 0 samples with Income >50K of black females at the age of 40 (in contrast, there are 44 samples of white females, and 202 white males, 7 black males, with Income >50K)\\n\\n These exemplify our setting with partial outcomes, with the sensitive attributes related to native country, race, age and gender. \\n\\n3. **Lacks of comparison with N. Kallus et al. [Residual Unfairness in Fair Machine Learning from Prejudiced Data. In ICML'18], A. Coston et al. [Fair Transfer Learning with Missing Protected Attributes. In AIES'19], Creager et al. [Flexibly Fair Representation Learning by Disentanglement. In ICML'19].**\\n\\n Thank you for the comments. 
We agree that it would be advantageous to put the contributions and relations to those previous works into the right perspective. In the residual unfairness/selective labels setting that has been highlighted, the problem comes from the fact that there is a difference between the decision taking place in real life and the prediction that the machine learning system is trained to perform. For example: in the bank loan application scenario, the decision is whether or not to give a loan, whereas the ML prediction is whether or not the applicant will pay back the loan. Importantly, the ML system is trained on historic data of the applicants that did/did not pay back the loan, meaning they got a loan in the first place. This means that if a person has never received a loan, the associated ML prediction will most likely be \\u2018not able to pay back the loan\\u2019, as the only people who can pay back a loan are those that got one in the first place. This corresponds to our setting of learning with partial outcomes, where we only observe one-sided decisions w.r.t. certain protected characteristics - if persons with certain protected characteristics have never received a loan / were always rejected, the ML system will ignore positive outcomes for those individuals.\\n\\n As regards Creager et al., 2019, the pre-existing MIM baseline does closely resemble the FFVAE model proposed therein, with the key distinctions being\\n\\n 1. we do not apply a disentanglement loss to the subspace associated with the protected attribute. Since we only have a single protected attribute in our setups, enforcing disentanglement between the different factors of z_s is irrelevant (calibrating the fairness of predictions by composition of subspaces, each associated with a different sensitive attribute, being the focus of Creager et al., 2019);\\n 2. 
an adversary is used to expel information related to s from z_y; Creager et al., 2019 takes the opposite approach of having a classifier predict s from z_s\\n\\n An abbreviated discussion of this kind has been appended to the explanation of the MIM baseline given in the main text. While the FFVAE model may not be entirely suitable, we do think that a baseline which encourages disentanglement of the entire latent space (a property shown to implicitly promote fairness when sensitive attributes are unobserved; Locatello et al., 2019), rather than just over a subset of it, would absolutely be worth having.\\n\\n [1] Locatello F, Abbati G, Rainforth T, Bauer S, Sch\\u00f6lkopf B, Bachem O. On the fairness of disentangled representations. In Advances in Neural Information Processing Systems 2019 (pp. 14611-14624).\"}",
"{\"title\": \"Our response\", \"comment\": \"1. **There doesn't seem to be a single algorithm that has a clear better performance compared to the baselines. E.g., for colored-MNIST ZSF seems to work a bit better, but for Adult Income MIM+bal, and FWD seem to work better.**\\n\\n Experiments for the Adult Income dataset have been redone using improved hyperparameters and corrected evaluation protocol, the error being that the weighted-sampling described in Section 2.1 not had not been used for training of the classifier for either our method or the baselines. Please refer to Table 3 for an updated version.\\n\\n2. **What if during deployment time, no context set is available and online inference (for each incoming individual) is needed? Or, what if I have a context set, but some of the quadrants are also missing, and even worse, the missing quadrants are different from the ones missing in training?**\\n\\n Our context set is much like the deployment dataset. If a context set is not available, we should consider a transductive learning setting where the deployment set is our context set. It is well-known that an online setting is strictly harder than a batch setting. In the future version, we will work to extend our zero-shot fairness framework for a transductive online learning setting of Ben-David, Kushilevitz, and Mansour [1].\\n\\n [1] S. Ben-David, E. Kushilevitz, and Y. Mansour. Online learning versus offline learning. Machine Learning 29:45-63, 1997. \\n\\n3. **Minor comments and questions**\\n\\n 3.1 In the experiments, for colored-MNIST, a comparable portion for each quadrant is retained for the context dataset, have you tried different retained portions and how does that affect clustering quality? 
Have you tried some of the more extreme settings (e.g., more skewed distribution over $|S|\\\\times|Y|$) and will you still obtain reasonable clusters?\\n\\n Exploring the effect of the size of the context set on model performance is something we are keen to do in order to test the limits of the model. As mentioned above, we are also interested in exploring the extreme case where we do not have a context set at all and must resort to transductive learning in which the distribution of the training set is matched directly to that of the test/deployment set. \\n\\n 3.2 I didn't find how the context dataset is constructed for Adult Income, could you provide more information on this?\\n\\n For the Adult Income dataset, the context set is simply a regular subset of the data, which, unlike Colored MNIST, is naturally biased with respect to the protected attribute, Gender; we have updated the manuscript (Appendix B.2, specifically) to include this detail.\\n\\n 3.3 How are $\\\\lambda_1$ and $\\\\lambda_2$ chosen in the experiments?\\n\\n As detailed in Table 5 of the Appendix, $\\\\lambda_1$ is 10^-2 and $\\\\lambda_2$ is 10^-3 on ColorMNIST; $\\\\lambda_1$ is 0 and $\\\\lambda_2$ is 10^-2 on Adult. We also elaborate on the hyper-parameter tuning in Section C of the Appendix.\\n\\n 3.4 Some of the error bars in table 1&2 are rather large, could the authors further clarify which set of the results are statistically significant? \\n\\n Please refer to our answer to **AnonReviewer4**.\"}",
"{\"title\": \"Our response\", \"comment\": \"1. **Justification for using flawed dataset with invisible demographics**\\n\\n Thank you for this important point. The problem of invisible demographics is indeed real. Selective labels problem [1], intersectional fairness [2,3], and a combination of both can easily translate to partial outcomes and missing demographics. The Adult Income dataset has 0 samples with Income >50K of black females at the age of 40 (further detailed for other intersectional groups for this dataset can be found in our comments to **AnonReviewer4)**. This dataset has been used by a number of prior studies in the fairness-aware machine learning literature such as Zemel et al. 2013, Zafar et al. 2017, Madras et al. 2018. We do agree that dataset consumers should take extra care about the cost-benefit analysis of selecting particular datasets for their machine learning tasks. Any corrective action such as fairness interventions or inaction should be recorded. We have added a section on \\\"current limitations\\\" in the revised manuscript. \\n\\n [1] H. Lakkaraju, J. Kleinberg, J. Leskovec, J. Ludwig, and S. Mullainathan. The selective labels problem: Evaluating algorithmic predictions in the presence of unobservables. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017.\\n\\n [2] J. Buolamwini and T. Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, 2018.\\n\\n [3] M. Kearns, S. Neel, A. Roth, and Z.S. Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proceedings of the International Conference on Machine Learning, 2018.\\n\\n2. 
**Theoretical results on algorithmic fairness with missing demographics (Blum and Stangl)**\\n\\n Blum and Stangl considered two forms of data corruptions: a) under-representation of positive examples in a disadvantaged group, and b) a substantial fraction of positive examples in a disadvantaged group mislabeled as negative. Theoretical results are achieved by assuming equal base rates across groups. Blum and Stangl noted that this assumption may not be realistic in all settings. We do not assume equal base rates (a perfect dataset), but we aim to construct a perfect dataset from an unlabeled context set. Ideally, our theoretical results should first bound or characterize the difference between learning with a perfect dataset and learning with an approximately perfect dataset in probabilistic terms. We can subsequently apply the union bound, utilising results such as those of Blum and Stangl, to make a statement about recovering the Bayes Optimal Classifier. We have not managed to do so.\\n\\n3. **The explanation for how a \\\"perfect dataset\\\" is constructed is vague (section 2.2). Since the clusters are not explicitly named (i.e. no labels), how is this a \\\"perfect dataset\\\", defined as one where the labels y and group s are independent? Is there any way to check the independence?**\\n\\n Given that we are dealing with discrete variables, independence is achieved if we have equal proportions of all combinations, i.e., all combinations are equally represented (P(y,s)=P(y)P(s) \\u21d2 P(y,s)=0.25 for binary y and s). So if we manage to identify all the clusters that correspond to all the combinations of y and s, then we can sample from these clusters at an equal rate to achieve a balanced dataset in which y and s are not correlated.\\n\\n It's true that the clusters are not named, but this is not necessary for this task. To compute the clustering accuracy, we actually have to solve the linear assignment problem of cluster-source association (i.e. 
we need to explicitly name the clusters). As such, we \\\"name the cluster\\\" only for assessing the quality of the approximate perfect dataset.\"}",
"{\"title\": \"Suspicious experimental results and unclear merit\", \"review\": \"This paper tackles a fair classification problem with an invisible demographic, a situation where the records who have some specific target labels and sensitive attributes are missing. In this setting, the authors introduce a disentangled representation learning framework to make the resultant classifier fair by taking advantage of the additional dataset, context dataset. They demonstrate by the empirical evaluations that the proposed disentangled representation learning algorithm success to mitigate unfair bias by utilizing the perfect dataset, a dataset in which the target label and sensitive attribute are independent. Usually, the perfect dataset is unavailable; hence, they introduce a method to convert the context dataset into the perfect dataset. The authors also show that even if the context dataset is not perfect, the presented method successes to mitigate an unfair bias.\", \"the_strong_points_of_this_paper_are_as_follows\": [\"This paper introduces a potentially interesting problem, the invisible demographic.\"], \"the_weak_points_of_this_paper_are_as_follows\": [\"The experimental results have a high variance. Hence, they are weak to support the significance of the proposed algorithm.\", \"The motivation of the proposed method is unclear. Some existing methods already solve most of the crucial situations considered in this paper.\", \"This paper lacks a comparison with the important related method.\", \"Presentation is poor. I cannot follow the description of the algorithm.\", \"My recommendation is rejection. The main reason is that I have concerns about suspicious behavior in the experimental results. Also, the proposed method is not well-motivated, and its merit is unclear.\", \"I am very suspicious about the experimental results. The standard deviations for the fairness metrics shown in Table 1 and Table 2 are considerably high. 
Why can we believe the successful mitigation of unfair bias of the proposed method from these results? Even if I believe the reported values, due to the large standard deviation, we cannot say the authors' method outperforms the others but can only say it is competitive. I don't think this is a significant result.\", \"Parts of the invisible demographic problem are already solved. For example, a situation where records in some classes are missing is solved by utilizing semi-supervised learning techniques, e.g., Hsieh et al. Classification from Positive, Unlabeled and Biased Negative Data. In ICML'19. For a situation where the sensitive attributes are missing, there are several works, including\", \"N. Kallus et al. Residual Unfairness in Fair Machine Learning from Prejudiced Data. In ICML'18.\", \"A. Coston et al. Fair Transfer Learning with Missing Protected Attributes. In AIES'19.\", \"It is a rare situation where records with a specific combination of the target class and demographic group are missing. These existing methods already solve the other cases. Therefore, it is unclear that the proposed method has merits compared to the existing ones.\", \"There is a fair classification method based on disentangled representation learning:\\n- E. Creager et al. Flexibly Fair Representation Learning by Disentanglement. In ICML'19. \\nBecause this method and any fair classification method can apply to the problem tackled by this paper, it is necessary to compare the proposed method with them. I know these methods are not designed to work in the invisible demographic situation; however, it is unclear whether they do not work in the situation without empirically evaluating them. \\n\\nI cannot understand the introduced objective function in Eq. 10. What is the meaning of $f(z_y \\\\subset \\\\mathcal{X}_{perf})$ and $f(z_y \\\\subset \\\\mathcal{X}_{tr})$? While the function $f$ takes $x$ as its input, it takes a boolean value in Eq. 10. \\n\\nWhat is clustering accuracy? Its definition is missing.\\n\\n### Minor comments\\n- While I understand the situation where the whole sensitive attributes are missing, I wonder whether it is a realistic situation that part of the target labels and sensitive attributes are missing. Is there a concrete dataset in which the invisible demographic situation occurs?\\n- I am not sure about the notation of $\\\\mathcal{M}_{y=1,s=0}=\\\\emptyset$. If my understanding is correct, the set $\\\\mathcal{M}$ (omitting the subscript $y=1,s=0$, as it does not matter here) comprises all data points whose target label and sensitive attribute are $y=1$ and $s=0$, respectively. It involves not only the target data points but also unobserved data points available in the world. From this perspective, $\\\\mathcal{M}=\\\\emptyset$ means that there are no people whose target label and sensitive label are 1 and 0, respectively, in the world. In this case, we cannot construct context and deployment sets that satisfy Eq. 3 or Eq. 4.\\n- Typo on page 3, first paragraph: but in in contrast to -> but in contrast to\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Well organized paper on a relevant problem, but lacking in key experiment details.\", \"review\": \"############# Summary of contributions ##############\\n\\nThis paper introduces the problem of enforcing group-based fairness for \\u201cinvisible demographics,\\u201d which they define to be demographic categories that are not present in the training dataset. They assume access to a \\u201ccontext set,\\u201d which is an additional unlabeled dataset that does contain the invisible demographic categories of interest. They further provide an algorithm for enforcing fairness on these invisible demographics using this context set. \\n\\nSpecifically, their contributions are:\\n\\n- Algorithmic: They provide an algorithm for enforcing fairness on these invisible demographics. This algorithm involves first applying clustering methods on the context set to \\u201cbalance\\u201d it, followed by disentangled representation learning and on the \\u201cbalanced\\u201d context set.\\n\\n- Empirical: They provide experiments on two benchmark datasets (colored MNIST and Adult) comparing their proposed method to multiple baselines. \\n\\n############# Strengths ##############\\n\\n- The paper is organized well, and the problem of \\u201cinvisible demographics\\u201d is described and motivated well using concrete examples. \\n\\n- The architecture of the proposed method is documented clearly in Figure 2. \\n\\n- Their architecture builds on state of the art techniques such as DeepSets (Zaheer et al. 2017). Using DeepSets, the discriminator in their architecture estimates the probability that a given batch of samples, as a set, has been sampled from one distribution or the other. Preserving the set invariance to permutations is useful here, and different from a typical GAN discriminator.\\n\\n- The baselines in the experiment section are thorough. 
It\\u2019s useful to see a comparison between their clustering + balancing + disentangling method and the baseline methods of ZSF, which has balancing + disentangling but no clustering, and ZSF + bal. (ground truth), which has ground truth clusters + balancing + disentangling.\\n\\n############# Weaknesses ##############\\n\\n- The experiments section does not describe the implementation of the comparison to Hashimoto et al. 2018. Notably, the methodology of Hashimoto et al. 2018 is not specifically meant to enforce equality of acceptance rates, true positive rates, or true negative rates -- it only minimizes the worst case loss over unknown demographics. \\n\\n- The authors do not provide any description of hyperparameters tuned, or any use of a validation set for hyperparameter tuning. I could not find this in the appendix either. In fact, on page 7, they say that they \\u201crepeat the procedure with five different train/context/test splits\\u201d, which suggests no validation set. The parameters for the clustering methods are not given, and I find it hard to believe that no hyperparameters were tuned. Can the authors specifically provide the hyperparameters used, whether/how they were tuned, and any validation methods used (whether it be a validation set or cross validation)? \\n\\n- The experiments are all done with binary protected groups: purple vs. green for the colored MNIST dataset, and male vs. female for the Adult dataset. Furthermore, these groups are not hugely imbalanced in the context set to begin with. This makes the clustering task easier. It would be interesting to see experiments with protected groups with more than two categories. For example, in the Adult dataset, the race feature is highly imbalanced, with a very small proportion of examples labeled as Asian-Pac-Islander or Amer-Indian-Eskimo. 
It would be interesting to see how the clustering techniques compare when the context set includes more than two protected categories, there is initial strong data imbalance between those groups, and the \\u201cinvisible demographic\\u201d has relatively few data examples in the context set. This may not be entirely necessary for acceptance this round, but could be an interesting future experiment.\\n\\nThe notation is in multiple cases unclear/inconsistent, possibly due to typos. Examples listed below:\\n\\n- In the last paragraph on page 5, the notation and description of the support is confusing and not well defined. First, \\\\mathcal{S} and \\\\mathcal{Y} are themselves sets as defined in Section 2.1. Can the authors more specifically define what they mean by Sup(\\\\mathcal{Y}_tr)? Is this the set of elements from \\\\mathcal{Y} that are contained in the training set? If so, why not just notate this as \\\\mathcal{Y}_tr alone? The additional \\u201cSup\\u201d notation is confusing and appears unnecessary. Furthermore, what do the authors mean when they say, \\u201cwe wish to use Sup(\\\\mathcal{S}_{ctx} \\\\times \\\\mathcal{Y}_{ctx}) \\\\ Sup(\\\\mathcal{S}_{tr} \\\\times \\\\mathcal{Y}_{tr}) as the training signal for the encoder\\u201d?\\n\\n- [Top of page 6: \\u201cwhenever we have |S| > 1\\u201d] -- What does this notation mean? Is this the absolute value of the random variable S? This doesn\\u2019t quite make sense given that S was previously stated to be a discrete-valued protected attribute, which could be a vector with p entries. The next statement of this corresponding to the \\u201cpartial outcomes\\u201d setting is thus also unclear.\\n\\n- [Section 2.2: \\u201cc_i = C(z_i)\\u201d] -- What is z_i here? Is z_i the vector of (z_s, z_y) for the input features x_i? \\n\\n############# Recommendation ##############\\n\\nUPDATE (after author response): I appreciate the authors' response. The inclusion of the hyperparameters are helpful. 
I also think it's an improvement that the authors added a comparison to ZSF+bal.(ground truth) to the Adult experiment.\\n\\nI still have a question about the experimental comparison to Hashimoto et al. (called \\\"FWD\\\" in this paper). Is the version of \\\"FWD\\\" implemented in this paper using exactly the same fairness criterion as in the Hashimoto et al. paper? If so, am I correct in saying that the \\\"FWD\\\" comparison in the experiments section does not directly constrain for any of the measured AR ratio, TPR ratio, or TNR ratio? The authors should clarify this in a later version.\\n\\nOverall, I'm willing to raise my score to a 6, but still think the paper is borderline. The paper could still use some improvement in covering related work on the problem of fairness where the protected attributes are not fully known (including the references I suggested).\\n\\n------------- OLDER RECOMMENDATION BELOW -------------\\n\\nOverall, my recommendation is 5: Marginally below acceptance threshold. The paper states an interesting and practically relevant problem of enforcing fairness with \\u201cinvisible demographics.\\u201d The methodology is overall well documented, and the experimental baselines make sense. However, the implementation detail in the experiments section is severely lacking, including description of hyperparameters/validation methods and implementation details for the comparison to Hashimoto et al. If the authors provide some of these details and answer some of my notation questions, then I would be willing to raise my score.\\n\\n############# Questions and clarifications ##############\\n\\n- Why is there no comparison to ZSF+bal. (ground truth) on the Adult dataset? \\n\\n- Can the authors clarify what the ZSF alone baseline is doing in the experiments section? It\\u2019s not written super clearly in the text. 
Does ZSF alone simply replace the perfect set in Figure 2 with the context set?\\n\\n############# Additional feedback ##############\\n\\n- Below I\\u2019ve listed some additional related work in the setting where protected attributes are unknown. This is not factored into the review, as these settings seem different enough and some of these works are recent. \\n\\nLamy et al. Noise-tolerant fair classification. NeurIPS, 2019.\\n\\nAwasthi et al. Equalized odds postprocessing under imperfect group information. ICML, 2020.\\n\\nWang et al. Robust Optimization for Fairness with Noisy Protected Groups. arXiv:2002.09343, 2020\\n\\n- [page 3: \\u201cWe can all agree that this sounds unfair\\u201d] -- nit: this wording seems unnecessarily strong to me. Let\\u2019s not claim that \\u201cwe would all agree\\u201d on something, especially when the meaning of unfair has not yet been defined.\\n\\n- [page 5]: There appear to be multiple typos in the paragraph following equation (10), where the variables V, Q, K are not written in math mode, and are instead just capital letters in the text.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting work on zero-shot fairness with partial demographics\", \"review\": [\"#Summary\", \"This paper studies zero-shot fairness where the demographic information is partially unavailable, but assuming the existence of a context dataset that contains all labels x all demographics (including the invisible). The paper proposes a disentanglement algorithm that separates information of the label and demographics, under two zero-shot settings: 1) learning with partial outcomes: both labels and both demographics are available, but for one of the demographics only negative outcome is present; 2) learning with missing demographics: one of the demographics is completely missing.\", \"#Pros\", \"Zero-shot fairness is a very important topic under many practical settings, where the demographic information can be (partially) missing due to sampling bias or privacy reasons.\", \"The two zero-shot settings presented in this paper are both very interesting, and the paper did a good job decomposing the two scenarios in the methods and experimental section.\", \"The paper is clearly presented, with careful analysis over each of the proposed component, with proper ablation studies.\", \"#Cons\", \"The biggest concern I have is the clustering part of the context set into a perfect set. This seems to be a prerequisite for the disentangle algorithm to perform well. However, there is no guarantee over the clustering quality, and this is partially reflected in the experiments (table 1 & 2) as well. For example, while ranking-based clustering achieves reasonable clustering accuracy, k-means seems to be rather bad for certain datasets (e.g., Adult Income). In addition, how does the distribution of the label x demographics on the context dataset affect clustering quality? 
I can imagine that in extreme cases, if the distribution is very skewed (some label x demographic combinations have very scarce data), then it is hard to get good clusters, which is very likely to happen in practice if the training distribution is already skewed.\", \"I think some further analysis on this is required, e.g., how the cluster quality differs w.r.t. different retained proportions of each quadrant.\", \"The experimental results seem to present different trade-offs for the proposed approaches. There doesn't seem to be a single algorithm that has a clearly better performance compared to the baselines. E.g., for colored-MNIST ZSF seems to work a bit better, but for Adult Income MIM+bal, and FWD seem to work better. The performance also varies a lot across different fairness metrics as well.\", \"Although the topic of zero-shot fairness is very important, the end-to-end setting in this paper is a bit artificial. It requires two things: 1) both label $y$ and demographic $s$ are present in the training data, although some of the quadrants are allowed to be missing; 2) there exists a context set that has all quadrants available for $y$ and $s$, thus can be used for balancing and learning the disentangled representations. I wonder how realistic this setting is in practice. It is very likely that 1) is true in the real world, but the requirement of 2) makes the setting a bit constrained. What if during deployment time, no context set is available and online inference (for each incoming individual) is needed? Or, what if I have a context set, but some of the quadrants are also missing, and even worse, the missing quadrants are different from the ones missing in training?\", \"#Overall recommendation\", \"I think this paper studies a very interesting problem but some further analysis, e.g., how the distribution over the context data affects the results, and how to make the algorithm work reliably better in practice, is needed. 
Overall I think this is a borderline paper.\", \"#Minor comments and questions\", \"In the experiments, for colored-MNIST, a comparable portion for each quadrant is retained for the context dataset, have you tried different retained portions and how does that affect clustering quality? Have you tried some of the more extreme settings (e.g., more skewed distribution over |S|x|Y|) and will you still obtain reasonable clusters?\", \"I didn't find how the context dataset is constructed for Adult Income, could you provide more information on this?\", \"How are $\\\\lambda_1$ and $\\\\lambda_2$ chosen in the experiments?\", \"Some of the error bars in table 1&2 are rather large, could the authors further clarify which set of the results are statistically significant?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Problem proposed has weak and/or problematic motivations, unsure about significance of contributions\", \"review\": [\"Summary\", \"This paper proposed a problem in algorithmic fairness where labeled examples for some demographic groups are completely missing in the training dataset and still the goal is to make predictions that satisfy parity-based fairness constraints.\", \"The method developed to solve this problem uses a \\\"context\\\" dataset with unlabeled data but containing individuals from all demographics to construct a 'perfect dataset' and 'disentangled representations'\", \"Quality\", \"This work appears to have only a superficial understanding of the field of algorithmic fairness, hence proposing a problem that in my opinion is artificial. In the case where a dataset has *zero* labeled examples for some demographic groups, this is such an extreme situation that it is a clear red flag that there is a large bias in the collection process and/or the data collection design was poorly done---what is the justification for continuing to use this dataset as is? Is there any real life scenario where one is forced to use this problematic dataset (this could be irresponsible, even unethical), instead of trying to get labels for the \\\"context set\\\" (which is assumed to be available!) or rethinking the data collection process? Clearly one ought to go back to the drawing board in this imagined worst case situation.\"], \"references\": [\"Kate Crawford. The hidden biases in big data. Harvard Business Review, 1, 2013.\", \"Kate Crawford. The trouble with bias. NIPS Keynote https://www.youtube.com/watch?v=fMym_BKWQzk, 2017.\", \"Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daum\\u00e9 III, Kate Crawford. Datasheets for Datasets\", \"The authors write: \\\"If the model relies only on the incomplete training set, it is not unreasonable to expect that the model to easily misunderstand the invisibles. 
We can all agree that this sounds unfair, and we would like to rectify this.\\\" without any proof or mathematical argument. \\\"this sounds unfair...\\\" is an unrigorous and uncritical statement that doesn't contribute any deeper insight, nor does it engage with existing work on what \\\"unfairness\\\" constitutes. It also does not explain why the paper's method (to massage a clearly problematic dataset) is any less unfair.\", \"The paper also does not cite some related work on algorithmic fairness with missing demographics, e.g.\"], \"recovering_from_biased_data\": [\"Can Fairness Constraints Improve Accuracy? Avrim Blum\\u2217, Kevin Stangl\\u2020\", \"I'm concerned that the paper calls the missing demographic groups \\\"the invisibles\\\" and then proceeds to still champion the use of the clearly flawed dataset. The algorithm has only intuitive justifications and so does not convince that the missing demographic groups would not be still somehow disadvantaged or find this procedure extremely unjust. The paper does not discuss any of these problematic aspects.\", \"Clarity\", \"The lack of theoretical guarantees for the algorithm makes it unclear what assumptions are needed for the algorithm to do something meaningful in 'rectifying' the extremely large missing pieces in the original training dataset.\", \"The explanation for how a \\\"perfect dataset\\\" is constructed is vague (section 2.2). Since the clusters are not explicitly named (i.e. no labels), how is this a \\\"perfect dataset\\\", defined as one where the labels y and group s are independent? Is there any way to check the independence?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
1Kxxduqpd3E | Rotograd: Dynamic Gradient Homogenization for Multitask Learning | [
"Adrián Javaloy",
"Isabel Valera"
] | GradNorm (Chen et al., 2018) is a broadly used gradient-based approach for training multitask networks, where different tasks share, and thus compete during learning, for the network parameters. GradNorm eases the fitting of all individual tasks by dynamically equalizing the contribution of each task to the overall gradient magnitude. However, it does not prevent the individual tasks’ gradients from conflicting, i.e., pointing towards opposite directions, and thus resulting in a poor multitask performance. In this work we propose Rotograd, an extension to GradNorm that addresses this problem by dynamically homogenizing not only the gradient magnitudes but also their directions across tasks. For this purpose, Rotograd adds a layer of task-specific rotation matrices that aligns all the task gradients. Importantly, we then analyze Rotograd (and its predecessor) through the lens of game theory, providing theoretical guarantees on the algorithm stability and convergence. Finally, our experiments on several real-world datasets and network architectures show that Rotograd outperforms previous approaches for multitask learning.
| [
"multitask learning",
"deep learning",
"gradnorm"
] | Reject | https://openreview.net/pdf?id=1Kxxduqpd3E | https://openreview.net/forum?id=1Kxxduqpd3E | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"frgNg4eJunf",
"9hbSopefdZt",
"ICZTcYck4nV",
"FLsccGCB-1N",
"IYvAkj55pwL",
"l6DdqWZLzLf",
"UGj7g5UdbCr",
"ZqxUamCQxpR",
"8xR_2doyjng",
"Q8wnabSju2z"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040379819,
1606304319701,
1605629763692,
1605629640827,
1605629379267,
1605628860004,
1605628815553,
1603949530538,
1603943419731,
1603910979334
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3517/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3517/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3517/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3517/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3517/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3517/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3517/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3517/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3517/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": [\"The paper proposes a novel formulation of GradNorm. GradNorm is presented as a Stackelberg game and its theory is used to understand and improve the convergence of GradNorm. Moreover, in addition to the magnitude normalization, a direction normalization objective is added to the leader, and a rotation matrix and a translation are used for this alignment. The paper is reviewed by three knowledgeable reviewers and they unanimously agree on the rejection. Here are the major issues raised by the reviewers and the area chair:\", \"The motivation behind the rotation matrix layers is not clear. It should be motivated in more detail and explained better with additional illustrations and analyses.\", \"The empirical study is weak. More state-of-the-art approaches from MTL should be included and more realistic datasets should be included.\", \"The proposed method is not properly explained with respect to existing methods. There are MTL methods beyond GradNorm like PCGrad and MGDA (MTL as MOO). These methods also fix directions. Hence, it is not clear how the proposed method relates to these.\", \"I strongly recommend that the authors improve their paper by fixing these major issues and submit it to the next venue.\"]}",
"{\"title\": \"New revision of the paper\", \"comment\": [\"We would like to let the reviewers know that a new version of the paper has been uploaded, implementing the main changes and comments requested by the reviewers. The main changes of the new manuscript are the following:\", \"Typos and wrong use of em dashes have been corrected and, in general, the text has been polished to improve readability.\", \"Proposition 4.1 has been replaced by a paragraph explaining the implications that this proposition has in supporting the application of GradNorm over the shared representation. Prop 4.1 is now in the appendix and has been further improved, only requiring the gradient matrix to have a left-inverse instead of being invertible.\", \"Additional comments have been added regarding how to simulate the oracles and how to obtain Rotograd's approximated objectives from its objective function.\", \"Experiments now include three ablation studies:\", \"First, we study the effect that the leader's learning speed has on the training of the MNIST experiments, showing that a slower learner benefits the overall results (Table 1).\", \"Second, the special case where the parameters of Rotograd are updated directly using the closed-form solutions is considered, showing (Table 1) that the training becomes highly unstable, obtaining poor results. This further supports the use of iterative updates.\", \"Third, using the same experimental setup we study how the model capacity influences negative transfer. Specifically, we show (Table 2) that the effect of negative transfer becomes more noticeable as we restrict the model capacity, since task cooperation is then required to further improve their results.\", \"A more extensive description of the considered methods (e.g.
rotograd-sgd) is given, making clearer the differences across methods.\", \"Results regarding the previous experiments on MNIST and Chest have been updated according to a better tuning of the leader's learning speed.\", \"Figure 4 of the appendix has been updated, now including all the different methods that were used in the main paper, rather than having only uniform and Rotograd.\", \"We would like to thank the reviewers once again for their useful feedback, and we hope these changes are well received.\"]}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for a detailed review that will help us to significantly improve the paper presentation. We will carefully revisit the manuscript to correct existing typos and improve the paper readability. We have posted a general response where we address points raised by several reviewers, including the ablation study, the \\\"leakage\\\" question, and a clarification regarding rotograd-sgd. Please refer to that common response for details on these points. In addition, below we provide specific answers to the rest of your questions. \\n\\n---\\n**Proposition 4.1**\\nWe recall that the original motivation of GradNorm is to equalize the magnitude contributions of the individual tasks to the gradients w.r.t. all the shared parameters $\\\\Theta$. However, for computational efficiency, the authors restrict the GradNorm solution to only a subset of the parameters (corresponding in their experiments to one layer in the shared NN). In contrast, when working on the shared representation Z, we can derive a bound on the norm of the gradients w.r.t. all the shared $\\\\Theta$ for the individual tasks (refer to Proposition 4.1), and thus, as desired, apply GradNorm to the overall network by working on Z. Of course, by working on Z, we cannot make all the gradient magnitudes exactly equal, but instead force them to lie in a target interval. We will clarify this point in the final manuscript.\\n\\n---\\n**Size of the architectures**\\nNegative transfer may occur especially in mid- and small-size architectures, as the individual tasks are \\\"forced\\\" to cooperate (but also to compete for shared resources). Thus, in our experiments, we consider reduced architectures (still with comparable accuracy compared to the original architecture) to avoid scenarios where, due to the high number of parameters, the backbone can fit all tasks without requiring positive transfer.
We would like to emphasize that, as the size of Z increases, it becomes more likely that the gradients across tasks become orthogonal, i.e., that different tasks use disjoint subsets of the shared intermediate representation Z.\\n\\n---\\n**Rotograd on classification tasks**\\nWe agree with the reviewer that in MNIST rotograd performs slightly worse for the classification tasks, although significantly better for the other tasks, than the uniform approach. However, such a difference in the classification tasks decreases when a more thorough hyperparameter optimization for all the methods is performed, as shown in the following table (which contains a summary of the new results that will replace Table 1 in the paper):\\n\\n| Method | Left \ud83e\udc51 | Right \ud83e\udc51 | Sum \ud83e\udc53 | Multiply \ud83e\udc53 | Density \ud83e\udc53 | \u0394 \ud83e\udc51 |\\n| :-------------- | :-----------: | :-----------: | :---------: | :-----------: | :---------: | :----------: |\\n| single task | **93.50 (00.47)** | **90.65 (00.46)** | 6.44 (4.63) | 159.08 (6.16) | 1.62 (1.78) | |\\n| uniform | 90.15 (00.53) | 86.65 (00.41) | 5.14 (0.33) | 149.21 (5.97) | 0.51 (0.02) | 0.06 (0.11) |\\n| rotograd | 89.01 (00.87) | 84.62 (01.19) | **4.54 (0.19)** | **134.95 (5.92)** | **0.23 (0.04)** | **0.18 (0.06)** |\\n\\nThe above results correspond to a learning rate of 0.02 for the leader and an exponential decay of 0.99 per iteration.\\n\\nWe believe that the slight deterioration in classification accuracy is due to the limited capacity of the NN, which trades off the performance across all tasks. This can be explained by the per-task learning dynamics shown in Fig. 4(a) of Appendix A3, where we see that while uniform aggressively optimizes both classification tasks, the other tasks are learned at a lower pace. A similar behavior is observed when looking at the cosine similarities for the individual tasks in Fig.
4(b), where we can observe that while the gradient of the classification tasks is well aligned with the overall gradient evaluation (cosine similarity approx. 0.5), this is not the case for the rest of the tasks (being, e.g., the cosine similarity of the density below 0.1). In contrast, rotograd forces a similar cosine similarity for all tasks, and thus makes it easier for all tasks to be learned at a similar pace (although more slowly for the classification compared to uniform). In other words, rotograd makes the cosine similarity comparable across all tasks (which means worsening the classification performance), whereas in uniform the density task is completely orthogonal to the others.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"**Clarity of the paper**\\nAs suggested by the reviewer, we will carefully review the writing of the paper to improve its readability and to make it more self-contained by providing all the necessary details on both the motivation and technical details of our approach. \\n\\n---\\n**Proposition 4.1 and the role of Z**\\nThe point of Prop. 4.1 is that we can bound the norm of the gradient w.r.t. the shared parameters (the original goal of GradNorm) by equalizing the norm of the gradient w.r.t. the shared representation Z. Such a bound is given in Eq. 5, which depends on the inverse of the gradient matrix. The gradient matrix is of the size of the number of parameters times the size of the shared representation Z, and thus in the general case is not invertible (as it is not square). Fortunately, our theoretical results still hold when the gradient matrix is left-invertible, which does not require a square matrix, as it only requires that the rank of the gradient coincides with the dimensionality of Z (which is in general significantly smaller than the number of parameters). Moreover, we point out that we do not need to restrict ourselves to a particular norm, as all norms are equivalent in finite-dimensional spaces and thus do not change the validity of our results. We will clarify this in the revised version of the paper. \\n\\n---\\n**Oracle formulation (Eq. 8) and Leader objective (Eq. 9)**\\nThere are two types of oracle functions in our formulation, which predict the next evaluation point of respectively the shared and the individual tasks' representation and are approximated in Eq. 8. These approximations correspond to a step of gradient descent (as detailed before Eq. 8), and are necessary in order to be able to solve the leader objective, i.e., to find the optimal transformation of the shared representation into the individual tasks' representation in the next iteration of the learning algorithm.
We will clarify this in the revised version of the paper. \\n\\nWhen considering the oracle approximation in Eq. 8, the leader objective in Eq. 6 readily splits into the two objectives in Eq. 9, plus a residual term that tends to zero when the two individual objectives in Eq. 9 are solved with zero error (refer to Eq. 18 in appendix A2 for the exact relationship). When Eq. 9 cannot be perfectly solved, the solution of Eq. 9 approximates (up to a mismatch) the solution of Eq. 6. We refer the reviewer to appendix A2 for further details. \\n\\n--- \\n**GradNorm formulation**\\nWe do not see any difference between Eq. 3 in our paper and Eq. 2 in the GradNorm paper*, except for the adaptation to our notation and for the fact that in Eq. 3 we only consider one particular task (i.e., we have removed the summation over tasks in Eq. 2 of GradNorm).\\n\\n*https://arxiv.org/pdf/1711.02257.pdf\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"First of all, we would like to thank the reviewer for such an accurate summary of our work. We are delighted to observe interest in this novel formulation based on Stackelberg games, which we believe can shed some light on the dynamics of MTL methods such as GradNorm and Rotograd.\\n\\n---\\n**2nd contribution - Rotograd**\\nTo the best of our knowledge, the closest approach to ours is \\\"d-grad via architecture\\\", which uses an NN architecture to implement an affine transformation of the output of every hidden layer in the multitask network with the aim of avoiding conflicting gradients. However, d-grad suffers from two major limitations in comparison to rotograd. First, in its formulation there is no explicit objective on the gradient alignment, and thus, it does not provide any theoretical guarantees. As a consequence, it is not clear if the better performance shown in the empirical evaluation is due to avoiding conflicting gradients or to the additional expressiveness of the model (as there is an additional affine transformation per layer). Second, d-grad does not impose restrictions on the affine transformation NNs (or equivalently, on the individual gradient magnitudes), which may still result in a negative transfer between tasks due to the disparities in the individual gradient magnitudes (a problem addressed by GradNorm and our extension, i.e., rotograd). We will provide a detailed description of d-grad and its (dis-)similarities with rotograd in the revised version of the manuscript.\\n\\n---\\n**Experimental evaluation and representation alignment**\\nWe refer the reviewer to our general response for a detailed response on the empirical evaluation of rotograd, which includes an ablation study to validate the Stackelberg formulation, and to the question regarding the representation alignment.\"}",
"{\"title\": \"General response to all reviewers (2/2)\", \"comment\": \"**Ablation study**\\nWe agree with the reviewers that the paper would benefit from an ablation study to better understand the role of each element in the problem formulation. Thus, we will include in the revised version of the paper several new sets of experiments:\\nFirst, to better understand the implications of the Stackelberg results on the stability of the training, we perform experiments with different learning rates and decays for the leader. As a special case we consider SGD with learning rate 1 and no decay, which corresponds to the case where we directly set the parameters at each step to closed-form solutions of Eq. 9. The following table provides a subset of the results we plan to add in the paper (for MNIST with different seeds, initial learning rates equal to 0.02 and 0.001 for respectively the leader and follower, and with different decays): \\n\\n| decay | Left \ud83e\udc51 | Right \ud83e\udc51 | Sum \ud83e\udc53 | Multiply \ud83e\udc53 | Density \ud83e\udc53 | \u0394 \ud83e\udc51 |\\n| :-------------- | :--: | :---: | :--: | :------: | :-----: | :----------: |\\n| 0.9 | **89.46 (0.57)** | **85.90 (0.65)** | 4.64 (0.25) | **133.97 (7.20)** | 0.26 (0.06) | **0.18 (0.06)** |\\n| 0.99 | 88.99 (0.81) | 84.92 (0.97) | **4.58 (0.16)** | 135.19 (4.35) | **0.24 (0.05)** | **0.18 (0.06)** |\\n| 0.999 | 85.51 (1.55) | 80.09 (1.86) | 4.79 (0.18) | 142.07 (3.86) | 0.24 (0.08) | 0.15 (0.06) |\\n| 0.9999 | 84.56 (1.65) | 79.46 (1.80) | 4.81 (0.23) | 142.64 (4.07) | 0.24 (0.06) | 0.15 (0.06) |\\n| 1.0 | 83.55 (1.69) | 79.31 (2.43) | 4.88 (0.20) | 143.41 (3.17) | 0.25 (0.13) | 0.13 (0.09) |\\n\\nHere, we can observe that the faster the leader learning rate decays, the better the MTL results.
Similarly, results where we directly update Rotograd\u2019s parameters instead of performing iterative updates exhibit these instability problems, as shown in the following table:\\n\\n| Method | Left \ud83e\udc51 | Right \ud83e\udc51 | Sum \ud83e\udc53 | Multiply \ud83e\udc53 | Density \ud83e\udc53 | \u0394 \ud83e\udc51 |\\n| :-------------- | :-----------: | :-----------: | :---------: | :-----------: | :---------: | :----------: |\\n| single task | **93.50 (00.47)** | **90.65 (00.46)** | 6.44 (4.63) | 159.08 (6.16) | 1.62 (1.78) | |\\n| uniform | 90.15 (00.53) | 86.65 (00.41) | 5.14 (0.33) | 149.21 (5.97) | 0.51 (0.02) | 0.06 (0.11) |\\n| rotograd | 89.01 (00.87) | 84.62 (01.19) | **4.54 (0.19)** | **134.95 (5.92)** | **0.23 (0.04)** | **0.18 (0.06)** |\\n| rotograd (lr=1) | 64.73 (4.83) | 58.55 (5.94) | 6.11 (0.26) | 193.93 (9.53) | 0.89 (0.21) | -0.23 (0.21) |\\n\\nIn conclusion, the above results confirm the need for a slow-learning leader (see the section on necessity vs. sufficiency below for an intuitive explanation). These are illustrative results, and more detailed versions of them will be added to the paper.\\n\\nIn addition, we plan to include additional experiments to: \\nUnderstand the sensitivity of the model capacity to negative transfer, by testing identical architectures with different numbers of parameters (and thus, different model capacities).\\nShow the potential of rotograd. By performing a more thorough hyperparameter search on the learning rate and decay of the leader, we can improve the results shown in Table 2 of the original paper.\\nBetter understand how existing methods affect the cosine similarity of the gradients. To this end, we will include in Fig. 4 of the appendix the rest of the compared methods.\\n\\nBy performing these changes, we expect to cover all the experimental concerns and further encourage the adoption of rotograd by the community.\\n\\n---\\n**Sufficiency vs.
necessity**\\nWe find it necessary to remark that the stability results obtained from the Stackelberg formulation provide sufficient (and not necessary) conditions to achieve this stability. While we are able to consistently reach stable solutions in the toy examples (Fig. 3) by directly updating rotograd\\u2019s parameter to their closed-form solution, making sure that the leader is the slow learner (thus ensuring stability) becomes more and more important as we move to more complex datasets such as MNIST and, specially, ChestXRay. As mentioned above, we will include stability experiments to show the importance of the leader\\u2019s learning speed in non-trivial scenarios where this aspect becomes relevant.\"}",
"{\"title\": \"General response to all reviewers (1/2)\", \"comment\": \"We would like to thank the reviewers for providing us with such helpful feedback, which will ultimately improve our work. In general, all reviewers agreed on the importance of the problem we address; for example, reviewer 4 said that \\u201cminimizing gradient conflict is a well-motivated way to reduce negative transfer\\u201d. Moreover, all three reviewers expressed their interest and acknowledged the merit of the novel perspective based on Stackelberg games proposed in this work. Since several questions and concerns seem shared among the reviewers, we next address the common concerns. Specific answers to individual reviewer's comments are provided in separate answers.\\n\\n---\\n\\n**Representation alignment / Rotation matrix interpretation**\\nWe would like to remark that the goal of rotograd is to find both a common representation shared across all tasks, and a way to transform it to the individual tasks' representations. Our interpretation is that the rotation matrix acts as a \\u201ctranslator\\u201d between the common representation (which aims to jointly capture all the tasks and their relationships) and the task-specific ones (whose only objective is to optimize its individual loss function). While the shared representation performs the same role as any other shared representation in MTL, i.e., easing positive transfer (\\\"leakage\\\") between tasks, the rotation that maps it to the individual tasks' representations avoids potential negative transfer among tasks due to the \\\"disagreements\\\" among the individual objectives. \\n\\nImportantly, we would like to add that while we agree with the reviewer that, in the absence of the GradNorm loss in Eq. 7, a perfect solution of Eq.
6 would decouple the learning between tasks, preventing positive transfer among tasks, such a perfect solution can only be found in trivial settings, where the rotation transforms the shared representation into the task-specific one for all the observed samples (not only the samples in the current batch) with zero error. This is highly unlikely (if even possible) in our settings due to the stochasticity of the inputs (and, in turn, in the per-observation gradient evaluation), the stochasticity introduced by the batch, and the imperfect oracle functions (i.e., the approximation of the oracles in Eq. 8). Moreover, as we also perform iterative (SGD) updates in the leader, even if a zero-error solution of Eq. 6 existed, by the time the follower reaches its optimum, information across tasks would have already been shared in the shared representation (and thus, the shared parameters).\\n\\n---\\n**Clarifications / Rotograd-sgd** \\nIn order to improve the accessibility and readability of the paper, we will add more extensive explanations and further clarifications in the revised version of the paper. The description of rotograd-sgd deserves special attention, as it seems to not be stated clearly enough in our experiments. The baseline rotograd-sgd shares the same learnable parameters as the standard rotograd and optimizes the same two objectives in Eq. 6. However, instead of solving Eq. 6 in closed form (as the proposed rotograd does) and setting the gradient as the difference between the current point and this solution, it instead relies on automatic differentiation to evaluate the gradient of the parameters.\"}",
"{\"title\": \"Interesting idea but weak experiment implementation and lack of motivation for the proposed method\", \"review\": \"In the paper, Rotograd is proposed as a new gradient-based approach for training multi-task deep neural networks based on GradNorm. GradNorm is first formulated as a Stackelberg game, where the leader aims at normalizing the gradient of different tasks and the follower aims at optimizing the collective weighted loss objective. Under this formulation, one can utilize theoretical guarantees of the Stackelberg game by making the leader have a learning rate that decays to zero faster than the follower. To further account for the different gradient directions, a learnable rotation and translation are applied to the representation of each task, such that the transformed representation matches that of the single-task learning. By adding an additional term accounting for learning this rotation, the leader in the Stackelberg game will minimize the loss to homogenize both the gradient magnitude and match the representation to single-task learning as close as possible.\\n\\nIn general, I find the direction of gradient homogenization for multi-task learning very important and interesting. The paper provides an interesting perspective through the Stackelberg game formulation, which provides a framework for selecting the learning rate of GradNorm-type gradient homogenization methods. The other contribution of the paper is a learnable task-specific rotation that aligns the task gradients with single-task learning. The proposal of a learnable rotation matrix seems an interesting idea, although I am not sure if it has been proposed previously for multi-task learning. \\n\\nI find the first contribution of formulating the problem as a Stackelberg game to be interesting and novel.
However, in terms of the second contribution, I have some concerns about whether it makes the most sense to align the transformed representation with that of single-task learning. For MTL, one of the key benefits is learning a better representation by sharing it across different tasks to encourage helpful transfer between the tasks; by constraining the transformed representation to be as close as possible to the single-task learning representation, it might limit the transfer between tasks since the representations are constrained to be equivalent to those learned by single-task learning. I think it is helpful to think about using rotation-invariant representations for aligning the gradient directions, but it is questionable to align it to that of the single-task learning. \\n\\nAnother major concern is about the experimental results: full experiments are only conducted on one real-world dataset. The experiment on the second dataset seems to be very preliminary, which might not be sufficient to justify the proposed method empirically. Also on the second dataset, it seems the two different implementations of Rotograd have a large discrepancy in the results, which might need more investigation about why this happens. Meanwhile, many ablation studies seem to be missing. I am mostly interested to see experiments that validate the Stackelberg game formulation, for example by using different learning rates for the leader and the follower. Also, it would be interesting to see how the proposed Rotograd compares with pure GradNorm on gradient direction alignment. Overall, I feel the experiments are not complete for validating the effectiveness of the method.\", \"some_minor_points\": \"the description of the d-grad method seems to be missing. Also, Yu et al. [2020] also deals with gradient aligning for MTL, which could be considered as a baseline to compare with.\\n\\nYu, T., Kumar, S., Gupta, A., Levine, S., Hausman, K., & Finn, C. (2020). Gradient surgery for multi-task learning.
arXiv preprint arXiv:2001.06782. \\n\\n--------After author's response----------\\n\\nI am not fully convinced by the explanation of the motivation behind the rotation matrix, in particular why it is aligning with the single-task learning, which is counter-intuitive. The authors provided more ablation studies; however, the evaluation on datasets is still quite preliminary with some questions remaining (such as why there is a discrepancy between the two versions of Rotograd on the second dataset). Therefore I am keeping my original score.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The writing needs improvement. The proposed idea is not well justified. The empirical results are weak.\", \"review\": \"This paper presents an extension of GradNorm to address task conflict due to discordant gradient directions. Specifically, it introduces a rotation matrix to rotate the hidden representation from the last shared layer. The authors put the proposed method in the context of game theory to show stability and convergence of the training, which might be of merit.\\n\\nThe writing of the paper doesn\\u2019t meet the publication standard, needing major work to improve. There are many typos and awkward sentences, hindering understanding of their work. Also, there are many places that need clarification; for example, in Proposition 4.1, the inverse of the gradient of Z with respect to \\\\theta needs to be calculated. So, what is the shape of this gradient matrix? How is it necessarily a square matrix? What does ||\\\\Delta_{\\\\theta} Z|| represent? The F-norm? There is a lack of adequate explanation of the motivation behind the objective in Eq. (6). By reading the paper, I have no idea about the two oracle functions, and why they are defined in the way shown in Eq. (8). \\n\\nEq. (3) is inaccurate, not aligning with that proposed in the GradNorm paper for the computation of L_{grad}^k.\\n\\nEq. (9) is problematic. Why does R_k z_i^t not appear in the objective function of the first optimization problem? If this is because z_i^{k,t} = R_k z_i^t + d_k, then the objective in the second optimization problem would be just 0. \\n\\nWhy operating on z instead of the gradient in GradNorm can resolve the discordant gradient issue among tasks is not properly justified. \\n\\nThe reported empirical results are weak and do not support that this method works as claimed.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review of Rotograd\", \"review\": \"Summary:\\n\\nThis paper proposes an MTL method that encourages the gradients on shared parameters to have similar directions across different tasks. The motivation is to reduce conflicts between gradients of different tasks, so that training can proceed more smoothly, and fit multiple tasks more easily. The paper introduces a new way of thinking about this kind of method, i.e., through the lens of Stackelberg games, which could be useful in reasoning about the convergence of such methods. The method is shown to perform favorably against related methods, especially in regression settings.\", \"strong_points\": \"Minimizing gradient conflict is a well-motivated way to reduce negative transfer.\\n\\nThe algorithm description is detailed, and should be straightforward for others to implement.\\n\\nStackelberg games are an interesting framework for thinking about methods like GradNorm and Rotograd that adaptively guide MTL training.\", \"weak_points\": \"The theory is interesting at a high-level, but it is not clear that it provides insights on what makes Rotograd work. In the paper, one main takeaway from the Stackelberg games framework is that the methods converge if the leader\\u2019s learning rate is asymptotically smaller than the follower\\u2019s. This takeaway is implemented by decaying the leader\\u2019s learning rate, but it is not shown that this is a key point required for Rotograd to work. I would not be surprised if the results were unaffected if this decay were removed. If this point is really important, it should be illustrated in ablation studies. More broadly, since the point does not only apply to Rotograd, this ablation could also be done on Gradnorm and other methods.
Such ablations would be one way to connect the theory to the methods.\\n\\nAnother main takeaway from the theory is that the rotation matrices and translation vectors should be updated with gradient descent, instead of simply replacing them each step. Intuitively, the algorithm would still make sense and be simpler if R and d were simply replaced. Experiments showing that the gradient-descent update rule is necessary would help show the value of the theory.\\n\\nSimilarly, the value of Proposition 4.1 is not clear. Is it to prove stability? Does this have some particular connection to Rotograd, or is it a useful fact about hard parameter-sharing methods in general?\\n\\nThere is one ablation \\u201crotograd-sgd\\u201d, but it is not clear how exactly it works: Can it simply update R and d however it wants, or is Eq. 9 still used to regularize the updates in some way?\\n\\nBy adding the rotation matrices, it\\u2019s possible that information that would be useful to share across tasks is instead stored in these task-specific matrices. That is, conflict between tasks can beneficially lead to more general representations. Restricting R to be a rotation instead of any matrix is one step towards limiting the amount of information leakage into task-specific parameters. Is there a conceptual reason to expect that the benefits from reducing conflicts will outweigh this leakage?\\n\\nThe experiments are on an intentionally very small architecture, where one of the main issues is expressivity, which gives Rotograd an edge over methods that do not include an additional task-specific matrix. \\n\\nIn Section 5.1, does the method without Rotograd do poorly because there are no task-specific networks in that case?\\n\\nAlthough Rotograd is motivated to reduce negative transfer, Table 1 shows that Rotograd does not reduce negative transfer, but rather improves positive transfer. 
That is, uniform does better than rotograd in the tasks where single-task is better than multi-task, but rotograd does better than uniform in the tasks where uniform is already better than single-task. This makes me think that the benefits of Rotograd are not coming from reducing negative transfer, but from somewhere else.\\n\\nIs there an explanation for why Rotograd does not work as well for multi-class classification tasks (i.e., performs worse than all other methods for Left and Right)? Is it because the task-specific heads have larger output sizes? E.g., could it be better to have a separate rotation matrix for each class? Figure 4 in A.3 confirms that there is an issue here: the cosine similarity is not higher for rotograd for the classification tasks.\\n\\nOverall, from the limited scope of the experiments it is not clear that Rotograd would provide practical advantages over competing methods. The ChestXray experiments show that although Rotograd does not hurt much, it does not help overall compared to uniform.\\n\\nThat said, it would still be interesting to see whether insights from Stackelberg games could lead to practical improvements for this problem.\", \"minor_comments\": \"The writing has some issues. These issues don\\u2019t make the work unclear, but they are a bit distracting. Some example suggestions for fixing distracting word choice: \\u201cpalliate\\u201d -> \\u201calleviate\\u201d, \\u201cspoiled\\u201d -> \\u201cnoted\\u201d, \\u201cwe have not being able to propose Rotograd, but also to derive\\u201d -> \\u201cwe have proposed Rotograd, and derived\\u201d. There is also frequent non-standard mixing of em dashes with spaces and commas.\\n\\n\\u201c$[r_k(t)]^\\\\alpha$ is a hyperparameter\\u201d -> \\u201c$\\\\alpha$ is a hyperparameter\\u201d The hyperparameter is \\\\alpha, correct?\\n\\n----\", \"update\": \"I am very happy to see the new experiments that validate the implications of the Stackelberg games theory. 
The main drawback of the paper is that it is not clear that direction homogenization could lead to practical improvements for multi-task learning. The additional experiments in Table 2 are useful, and suggest that much of the benefit comes from the greater expressivity due to task-specific matrices.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
JYVODnDjU20 | UNSUPERVISED ANOMALY DETECTION FROM SEMANTIC SIMILARITY SCORES | [
"Nima Rafiee",
"Rahil Gholamipoor",
"Markus Kollmann"
] | In this paper we present SemSAD, a simple and generic framework for detecting examples that lie out-of-distribution (OOD) for a given training set. The approach is based on learning a semantic similarity measure to find for a given test example the semantically closest example in the training set and then using a discriminator to classify whether the two examples show sufficient semantic dissimilarity such that the test example can be rejected as OOD. We are able to outperform previous approaches for anomaly, novelty, or out-of-distribution detection in the visual domain by a large margin. In particular we obtain AUROC values close to one for the challenging task of detecting examples from CIFAR-10 as out-of-distribution given CIFAR-100 as in-distribution, without making use of label information. | [
"Anomaly Detection",
"Out-of-Distribution Detection",
"Novelty Detection"
] | Reject | https://openreview.net/pdf?id=JYVODnDjU20 | https://openreview.net/forum?id=JYVODnDjU20 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"UGLjotn078",
"AYNW3JeP62",
"gyDCt6juqv",
"A-AoijcNvQ7",
"u2wKa6AJLwp",
"OFs4jtfzN9k",
"XV11GF3hXxA",
"Rzoiqw9xkBu",
"H3tb-TnfPcA",
"4ITNaKYgFAz",
"xCuu7VxXG0I",
"krfrHCB7zCB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040366635,
1606215291153,
1606210605928,
1606209684814,
1606206864488,
1606166208445,
1606155585300,
1606146970892,
1604346388666,
1603990881003,
1603966572082,
1603898993645
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3513/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3513/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3513/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3513/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3513/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3513/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3513/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3513/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3513/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3513/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3513/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper proposes a two-stage approach for anomaly detection - first train a low-dimensional embedding, potentially using self-supervised learning methods, and then train a discriminator on top of the embedding that takes in pairs of examples and outputs a score which can be used for anomaly detection. A test example is paired with the next nearest neighbor. A common concern of the reviewers was the paper's claim to be a general approach for anomaly detection, whereas experiments are reported only on vision datasets. The authors have addressed this by making changes to the title and to the claims made in the paper. However, R1 and R2 still have concerns about insufficient empirical evaluations, in particular the lack of non-vision datasets.\\n\\nAs the paper aims to tackle the problem where OOD examples are spread through the sphere, appearing mixed with normal examples, I think fitting a nonparametric density model (e.g., using KDE) or a parametric density model (e.g., a mixture model) on the embeddings is a natural baseline to compare with. \\n\\nI encourage the authors to strengthen the empirical section of the paper based on reviewers' comments and resubmit to a future venue.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"As indeed we have not shown any results outside the visual domain in our paper, we will remove the assertion in the abstract that the approach \\\"can be extended to a wide range of anomaly detection problems\\\". However, we want to emphasise that our results within the visual domain show strong improvements for difficult OOD detection tasks.\"}",
"{\"title\": \"Paper has been made somewhat better than earlier\", \"comment\": \"Some of my concerns have been addressed such as narrowing the scope as per the title, clarification on the degree to which the 'semantic information' has been captured by the model, and why close instances on the unit hyper-sphere might not share semantic information. I have increased my score by one point accordingly.\\n\\nHowever, I still find the number of datasets too few. I would encourage the authors to add additional datasets from at least one other domain (text/audio). It is easy to say (as mentioned in revised abstract) that the proposed technique can be extended widely to other types of data; in reality, it might be just very hard to define semantic neighborhood in an implementable manner for other types of data.\"}",
"{\"title\": \"Change in title\", \"comment\": \"As there was common agreement among the reviewers that the generality promised by the title is not supported by the results in the manuscript, we decided to change the title and, consequently, the name of our method.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the Reviewer for the positive evaluation of our manuscript and for the helpful comments.\", \"here_our_detailed_response_to_the_major_comments\": \"1. We have now included a literature review at the beginning of the paper and explain the methods that are most related to ours in more detail.\\n2. We have now included the results of a sensitivity analysis (Table 2), where we changed transformation strengths and other hyperparameters of the model and reported the effect on AUROC values.\\n3. We now refer more consistently to Figs./Tables/the Appendix to improve the readability of the paper.\", \"minor_comments\": \"1. We defined $d$ above Eq. 1.\\n2. We corrected the quotation marks.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We apologise to the Reviewer for being imprecise on several issues and hope that our approach is presented better in the revised version.\", \"here_our_detailed_response_to_the_main_comments\": \"1. We agree that almost all OOD approaches are based on feature extraction followed by binary classification, which is in fact the most natural approach to the problem. However, most other approaches assume or require that the in-distribution occupies a simply connected region in a lower dimensional (latent) space for optimal discrimination. In contrast, our method does not require a separable latent space for OOD detection. In our approach, the output of the encoding function $f(x)$ can be a \\u2018Swiss-cheese\\u2019 like latent space, where the in-distribution is mapped to 'holes' and the 'cheese' is OOD. The reason is that the mapping $h(x)=f(x)/||f(x)||$ projects in any case both in-distribution examples and OOD examples onto the lower dimensional surface of a unit-hypersphere, where in-distribution examples and OOD examples can lie next to each other. As the discrimination is carried out relative to a reference example (nearest neighbour) and not by a fixed decision boundary, the distribution of OOD examples on the unit-hypersphere is not relevant. \\n\\n2. We fully agree that we do not provide any evidence that our approach works outside the visual domain, although we can argue that contrastive methods have been successfully applied to NLP (Word2Vec) and Audio (Contrastive Predictive Coding). We therefore changed Title, Abstract, and the content of paper to narrow down our predictions to the visual domain.\\n\\n3. We apologise for the \\u2018all\\u2019 in \\u2018\\u2026 includes all semantic information\\u2019, which is certainly wrong and we have removed that. 
We can learn at most the information that is orthogonal to the transformations applied and although contrastive methods maximise the mutual information in theory, there is no sign that deep neural nets can approach this limit. However, we cannot follow the argumentation why our definition of \\u2018semantic neighbourhood\\u2019 is ill-defined. It is a direct consequence of the transformations used to train $h(x)$ and the cardinality of the semantic neighbourhood (if it\\u2019s 4 or 32) has only a minor effect (Table 2 in revised version) but should be a small number to maximise the amount semantic information that can be used for discrimination. The transformations are well defined, at least for images of objects. The strength of transformations are chosen such that positive pairs $(x,x\\u2019)\\\\sim P_{pos}$ get a higher score than any pair from the training set and negative pairs $(x,x\\u2019)\\\\sim P_{neg}$ get a lower score than any semantic similar pair form the training set (see Fig.4 in the revised version). \\n\\n4. Indeed the subnetworks likely share weights and are not independent. However, for an ensemble method to work that is not necessary. For example dropout as regularisation technique is effectively an ensemble method, averaging over exponentially many subnetworks during training that share weights. As the ensemble effect can indeed be expected to be larger for larger network size we use ResNet18/34 nets in the revised version and show the effect of our ensemble method in Appendix C.\\n\\n5. That two examples not sharing much semantic information are found next to each other on the unit-hypersphere is indeed counter intuitive. The reason is that the neural network $f(x)$ puts out a $d$ dimensional vector, where OOD examples and training examples can be found in different regions, as intuitively expected. 
However, $f(x)/||f(x)||$ projects $f(x)$ onto the lower dimensional surface of a unit-hypersphere with the effect that OOD and training examples can be mapped arbitrarily close to each other, as the contrastive objective distributes examples almost uniformly across the unit sphere.\\n\\n6. Contrastive objectives require large batch sizes to work well. For the discriminator, we reduced the batch size to 128.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the fair and very helpful comments and for providing a list of references. Here is our detailed response:\\n\\n \\\"As is, I recommend to reject this paper primarily due to a lack of clarity and missing details in the description of the approach, which makes it hard to assess the technical correctness and merit of GenAD.\\\"\\n\\nWe apologize for the lack of clarity. We now provide the details of which transformations are used for training $h(x)$ and $s(x,x\\u2019)$, and in particular how $P_{pos}/P_{neg}$ is defined, in the main text and Appendix D.\", \"response_to_minor_comments\": \"1) We changed the title and abstract and make clear that we apply our method to the visual domain. \\n2) Yes. We resized the distribution plot.\\n3) Large batch sizes are needed for contrastive learning (see Chen et al 2020), but indeed not necessary for the discriminator, for which we changed the batch size to 128.\\n4) Yes. \\n5) We now cite Cover & Thomas, Elements of Information Theory.\\n6) We probably made too strong a statement here and have therefore removed it. However, consider as in-distribution colored MNIST, with the constraint that only one color channel is allowed to be non-zero. Grey-scaled MNIST images are OOD by construction, but label information doesn\\u2019t help detect them if labels are independent of color.\\n8) If the Reviewer agrees, we would like to keep the discriminator terminology.\\n9) We corrected the spacing.\\n10) We compared with methods that don\\u2019t use labels, but there was still an error, which we corrected.\\n11) We corrected that.\\n12) We have now added new figures to show semantically similar images and transformation strengths.\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"We thank the reviewer for the fair and helpful comments and for spotting many small errors. Here is our detailed response:\\n\\n1) \\u2026 idea is simple and well motivated. If (and it is a big if) the results are verified, this could be a very important paper in the field of OOD detection.\\n\\nIn addition to providing the code for the paper, we now report in Table 1 AUROC values that are averaged over 2x5 runs for 2 standard deep neural network architectures (ResNet18 and ResNet34, 5 runs each). We further carried out a sensitivity analysis by changing different hyperparameters and types of transformations (Table 2). Our results are robust if the transformations for P_pos(P_neg) are sufficiently moderate(strong), respectively.\\n\\n2) Assumption is all OOD is semantic which may not always hold true \\u2026\\n\\nWe agree that our method needs prior assumptions to design the type and strength of the transformations used to train the model. However, the transformations define how the model generalizes over the in-distribution and are therefore part of the inductive bias that underlies any OOD model (e.g. Generative Models). The systematic tuning of hyperparameters using an in-distribution validation set can also be applied to our method by choosing transformations that maximise the \\u2018semantic\\u2019 similarity of nearest-neighbour pairs without rejecting an in-distribution test. But indeed, the resulting factors of variation might not be related to what is typically described as \\u2018semantics\\u2019.\\n\\n3) Unclear why gamma (Yneg and Ypos) was introduced\\n\\nThe effect of using gamma as a stochastic variable is now shown in Appendix C.\\n\\n4) Unclear how encoder and discriminator are trained\\n\\nWe now describe how training is carried out and which networks have been used in the Training Section and in Appendix D. 
In short, h(x) is trained first, before s(x,x\\u2019) gets the semantically close pairs determined by h(x).\\n\\n5) Why does the discriminator enable learning of semantic dissimilarity?\\n\\nThe examples that make up a negative pair for training the discriminator are transformations of two different examples from the training set \\u2013 and the assumption is that transformations of two different examples are almost always more dissimilar than two independent transformations of the same example (if transformations are not extreme). See the separation of the blue and red peaks in Fig. 4.\\n\\n6) While algorithm for sampling of positives is specified how are negatives sampled?\\n\\nWe now clarify this point below Eq. 2. \\n\\n7) In Table 2, ablation corresponding to T(x) should be similar to results from (Winkens et al. 2020) right? \\n\\nWinkens et al. 2020 use a different approach, so the values cannot be compared.\\n\\n8) \\u2026 so do you not train the discriminator on top of contrastive representations? \\n\\nWe use two different networks for training h(x) and s(x,x\\u2019) -- they use the same trunk architecture (ResNet18/34) but don\\u2019t share parameters. We clarify this now in the 'Training' section.\\n\\n9) Why was ADAM used instead of LARS as in Chen et al?\\n\\nWe tried LARS but didn\\u2019t see any improvement, so we stayed with ADAM (or AMSGrad, to be precise).\\n\\n10) Claim of general framework for OOD detection is strong as no results shown on non visual domains\\n\\nWe agree. We changed the title and abstract to make this clear from the beginning.\"}",
"{\"title\": \"interesting work but needs more clarification/verification on methods/details to validate the results\", \"review\": [\"Summary\", \"Presents GenAD as a general framework for anomaly detection\", \"Method builds on top of contrastive training and proposes to learn a discriminator to distinguish between semantically similar and dissimilar pairs of examples\", \"Results are SOTA but need verification through code and methods clarification\", \"Clarity/Quality:\", \"Paper is overall written OK but has several typos/grammatical errors, as highlighted below:\", \"\\u201cFor visual data we show new state-of-the OOD classification accuracies for standard benchmark data sets\\u201d -> new state of the \\u201cart\\u201d OOD classification\", \"\\u201cThe contrastive objective aligns feature vectors h = h(x)\\u201d -> consider using different symbols for the vector output and the encoder function\", \"\\u201cA statically meaningful score\\u201d -> statistically?\", \"The notation \\u201cPneg(x, x\\u2019) = Ppos(x)Ppos(x\\u2019)\\u201d is unclear. Is marginalization implied? (end of page 3)\", \"\\u201cmainly affects the weights of a small subnetwork Frankle & Carbin (2019)\\u201d - missing parentheses around reference\", \"\\u201cIf we belief in the lottery hypothesis\\u201d -> belief to believe\", \"\\u201cWe expect to see a significant increase in OOD detection performance upon increasing network size, which left to future work.\\u201d -> which is left to future work\"], \"novelty\": \"Central claim - Contrastive training maps examples to the unit hypersphere, but it is possible OOD examples can be in the same neighborhood. Hence the paper needs a semantic discriminator and introduces one, along with algorithms for sampling positives/negatives.\", \"significance\": [\"The central idea is simple and well motivated. 
If (and it is a big if) the results are verified, this could be a very important paper in the field of OOD detection.\", \"Questions/Comments/Clarification\", \"Assumption is all OOD is semantic, which may not always hold true, especially if there are stylistic variations introduced using different imaging equipment\", \"Unclear why gamma (Yneg and Ypos) was introduced\", \"Unclear how encoder and discriminator are trained? Is it jointly or separately? Are these the same networks? Architecture diagram for network setup is needed to clarify details\", \"Why does the discriminator enable learning of semantic dissimilarity?\", \"While the algorithm for sampling of positives is specified, how are negatives sampled?\", \"In Table 2, ablation corresponding to T(x) should be similar to results from (Winkens et al. 2020) right? However the corresponding values are much higher (78.3 vs 89.3). The only difference seems to be network sizes. Not sure how these results came about?\", \"In Appendix C2 - \\u201cTo train the discriminator s(x, x0 ), we use almost the same network structure as our contrastive encoder but with smaller width and the MLP layer projects to a scalar output.\\u201d -> so do you not train the discriminator on top of contrastive representations? If yes, then how is the network pruned to smaller width?\", \"Why was ADAM used instead of LARS as in Chen et al?\", \"Claim of general framework for OOD detection is strong as no results are shown on non-visual domains.\", \"Overall, this is an interesting idea but the method needs a lot more clarification and the results need verification. Would encourage authors to share code to help verify the methods/results.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review AnonReviewer2\", \"review\": \"**UPDATE**\\n\\nI acknowledge that I have read the author responses as well as the other reviews. I appreciate the clarifications and improvements made to the paper and have increased my score to 5.\\n\\nMy concerns about the generality of the framework (as also pointed out by Rev1) still hold, however, as an evaluation on non-image data is still missing. I encourage the authors to extend their work further in this direction, but as is, I would keep my recommendation to reject.\\n\\n#####\\n\\n**Summary**\\n\\nThis work presents a generic approach for out-of-distribution (OOD) detection or anomaly detection (AD) called GenAD. GenAD consists of two steps: First, (i) learning a spherical representation via contrastive learning to capture semantic similarities, followed by (ii) training a classifier to discern between semantically similar and dissimilar pairs of samples, given the representation from (i). An experimental evaluation on in-distribution vs. out-of-distribution dataset pairs (CIFAR-10 vs. SVHN, CIFAR-10 vs. CIFAR-100, CIFAR-100 vs. CIFAR-10) is presented, which shows that GenAD outperforms previous OOD methods on these settings.\\n\\n\\n**Pros**\\n+ OOD detection is an important open problem that is relevant and of interest to the community.\\n+ GenAD seems to improve over previous methods in the visual domain.\\n+ GenAD, in principle, is applicable to general types of data (e.g., images, audio, text, etc.).\\n\\n\\n**Cons**\\n- There are some critical details missing about the specific choices made for sampling negative pairs, which makes it hard to assess the technical correctness and merit of the presented approach. 
In general, I find it hard to follow and exactly understand all the relevant details from reading the description of the method in Section 2.\\n- Though the applicability of the approach to general types of data is emphasized, the experimental evaluation only includes image data.\\n- Some recent related work from the out-of-distribution [6, 9, 10, 5] and deep anomaly detection [7, 3, 1, 2, 4, 8] lines of research is missing; these works also study representations that are effective for detecting semantic out-of-distribution samples and propose various solutions.\\n\\n\\n**Recommendation**\\n\\nAs is, I recommend to reject this paper primarily due to a lack of clarity and missing details in the description of the approach, which makes it hard to assess the technical correctness and merit of GenAD.\\n\\nIn particular, how are $P_{pos}$ (via transformation or neighborhood or both?) and $P_{neg}$ exactly modeled in the experiments?\\nIn Section 2.2, $P_{neg}$ is defined as the product of positive marginals, but how is this implemented?\\nHow are the negative minibatch $\\\\{x_k^r\\\\}_{k=1}^N$ and the negative set of transformations in $T^{negative}$ in Algorithm 1 defined and chosen?\\n\\nThese details should be clarified and explained.\\n\\n\\n**Additional feedback and ideas for improvement**\\n- Include the missing details and try to explain the approach more clearly (there is one page of space currently left).\\n- Include other types of data in the experimental evaluation, which would strengthen the generality claim of the proposed approach.\\n\\n\\n**Minor Comments**\\n\\n1. The title of the paper is very generic.\\n2. The figures in the paper are disproportionately large and waste quite some whitespace.\\n3. The batch sizes reported in the experiments are uncommonly large (1024, 2048). What is the reason for this choice?\\n4. I think Algorithm 2 can be removed, as it just describes $k$-NN using cosine similarity, right?\\n5. 
Section 1: \\u2018Note that it is possible for a datapoint to have high likelihood under a distribution yet be nearly impossible to be sampled, a property known as asymptotic equipartition property in information theory.\\u2019 Citation?\\n6. Section 1: \\u2018Intuitively, the OOD detection problem should be independent of the hardness of an in- distribution classification task.\\u2019 Why? I could imagine the hardness of an in-distribution classification task can be due to a complex in-distribution, for which the OOD detection problem is also more difficult.\\n7. Make use of page 8 in the main paper, e.g. move interesting claims and derivations to the main paper.\\n8. Section 3.1: \\u2018[...] - for both the encoder $f(x)$ and the *classifier* $s(x,x\\u2032)$.\\u2019 I would avoid to use the discriminator term.\\n9. Table 1: Add space between method names and citations.\\n10. Section 4: \\u2018[...], with increase in state-of-the-art AUROC from 0.783 to > 0.999.\\u2019 What about the 0.856 of OpenHybrid in Table 1?\\n11. Section 4: \\u2018Note that $h(x)$ *encodes features* of semantic similarity but not necessarily *features that allow* to score semantic dissimilarity.\\u2019\\n12. Section 4: \\u2018In fact, we observe for CIFAR-100 that examples from the same semantic neighbourhood do not always share the same label.\\u2019 Could you include some example images?\\n\\n\\n#####\\n\\n**References**\\n\\n[1] F. Ahmed and A. Courville. Detecting semantic anomalies. In AAAI, pages 3154\\u20133162, 2020.\\n\\n[2] L. Bergman and Y. Hoshen. Classification-based anomaly detection for general data. In ICLR, 2020.\\n\\n[3] I. Golan and R. El-Yaniv. Deep anomaly detection using geometric transformations. In NeurIPS, pages 9758\\u20139769, 2018.\\n\\n[4] S. Goyal, A. Raghunathan, M. Jain, H. V. Simhadri, and P. Jain. DROCC: Deep robust one-class classification. In ICML, pages 11335\\u201311345, 2020.\\n\\n[5] P. Kirichenko, P. Izmailov, and A. G. Wilson. 
Why normalizing flows fail to detect out-of-distribution data. arXiv preprint arXiv:2006.08545, 2020.\\n\\n[6] A. Meinke and M. Hein. Towards neural networks that provably know when they don\\u2019t know. In ICLR, 2020.\\n\\n[7] L. Ruff, R. A. Vandermeulen, N. Go\\u0308rnitz, L. Deecke, S. A. Siddiqui, A. Binder, E. Mu\\u0308ller, and M. Kloft. Deep one-class classification. In ICML, pages 4393\\u20134402, 2018.\\n\\n[8] L. Ruff, J. R. Kauffmann, R. A. Vandermeulen, G. Montavon, W. Samek, M. Kloft, T. G. Dietterich, and K.-R. Mu\\u0308ller. A unifying review of deep and shallow anomaly detection. arXiv preprint arXiv:2009.11732, 2020.\\n\\n[9] R. T. Schirrmeister, Y. Zhou, T. Ball, and D. Zhang. Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features. arXiv preprint arXiv:2006.10848, 2020.\\n\\n[10] Z. Wang, B. Dai, D. Wipf, and J. Zhu. Further analysis of outlier detection with deep generative models. arXiv preprint arXiv:2010.13064, 2020.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Recommendation to Accept\", \"review\": \"##########################################################################\", \"summary\": \"The authors present a new algorithm for performing unsupervised anomaly detection in diverse applications such as visual, audio and text data. They propose a two-step method in which they first utilise contrastive learning to find a semantically dense map of the data onto the unit-hypersphere. Then, they classify neighbouring pairs of test examples as in- or out-of-distribution based on the amount of shared semantic information. Finally, they show that in several anomaly detection problems in the field of visual data their proposed method outperforms several existing methods.\\n\\n##########################################################################\", \"reasons_for_score\": \"I recommend to accept the paper since the authors deal with an important problem and they propose a clear and well-written method that, in their empirical applications, outperforms at least several existing approaches. Please find below cons that I suggest the authors address in the rebuttal period.\\n\\n##########################################################################\", \"cons\": \"1) Although the authors refer to several existing anomaly detection methods, I would suggest adding a separate and relatively small literature review section to the paper. In that section the authors should list the most relevant existing anomaly detection methods and briefly explain them. This will improve the readability of the paper.\\n\\n2) The authors identify that the main limitation of the proposed approach is the definition of a semantic similarity, which in some applications can be very difficult. Therefore, I suggest the authors perform a sensitivity analysis of their results with respect to the transformations that they use. 
I propose to add one or two tables similar to Table 1 in which they will compare versions of their method resulting from using different/misspecified transformations with the competing methods. They could for example add some 'noise' in the transformation that they use and re-perform the comparisons.\\n\\n3) The authors should make, within their main text, reference to the Figures and the Algorithms that they present. By giving briefly the utility of each of their Figures and Algorithms they will improve substantially the readability of the paper.\\n\\n##########################################################################\", \"minor_comments\": \"1) Define d in 'd-dimensional' in page 2. \\n\\n2) Conduct an extensive search for typos, correct for example the punctuation in 'everything that is not noise' at the bottom of page 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A dimensionality reduction based approach to anomaly detection for images that tries to overcome certain disadvantages of autoencoders\", \"review\": \"The paper proposes an OOD detector algorithm that first learns a function to reduce the data dimensionality followed by learning a classifier discrimination model to separate in-distribution data from OOD.\", \"pro\": \"1. The paper compares many baseline algorithms\\n2. The paper tries to address an important problem (OOD detector)\", \"con\": \"1. The paper title is 'A General Framework...', however, the few datasets selected for experiments represent a very narrow domain. The paper title should be narrowed down or more domains should be included in experiments.\\n2. There are gaps in the intuitions such as why would two instances in the same neighborhood in the reduced dimension not be expected to have similar labels.\", \"main_comments\": \"1. The overall approach is that of reducing the dimensionality of the data by projecting it onto a lower dimensional manifold (surface of hyper sphere) and then using a discriminator. This approach is not novel in general.\\n\\n2. While the paper claims that this is a general technique, it depends on the concept of 'semantic neighborhood' for which it only provides CIFAR variants as evidence. We do not know (contrary to claims) whether it might work on other types of data (audio, text, etc.)\\n\\n3. Section 4: \\\"Our interpretation ... includes all semantic information ... helps OOD detection. In contrast, learning from label information ... mainly the semantics that help predicting labels.\\\" -- The paper does admit that the 'semantic neighborhood' is ill defined (Section 6, Conclusion). Yet the paper assumes, in Section 4, that the proposed technique (using pairwise distance metric) learns it well for the image data it was tested on. It is hard to see how this interpretation is justified. 
My assumption is that the algorithm has only learned what is necessary for the task of OOD just as a classification algorithm will learn what is necessary for labeling. There are many critical decisions that have gone into the design of the proposed OOD detector (such as the distance metric to use, which features to use for the discriminator, etc.). It is more conceivable that in the end the algorithm has learned just enough representation that makes the combined design choices work well on the specific dataset. It is hard to generalize given that the experiments cover so few datasets. I suggest the paper remove 'semantic neighborhood' terminology.\\n\\n4. Section 2.2: \\\"...belief in the lottery hypothesis...\\\" -- Many of the subnetworks might be sharing weights and are therefore not independent. This point becomes more important because as discussed in Section 3.1, a small network was used which increases the likelihood of weight-sharing. So, the true ensemble effect might be absent in reality.\\n\\n5. Section 2.2: \\\"The idea is now to make use of the fact that nearby examples on unit-hypersphere share semantic information if both come from the in-distribution but don\\u2019t share semantic information if one of the two examples is OOD.\\\" -- It is not clear to me why any two close examples would not share semantic similarities assuming that the mapping function is smooth. In case the contrastive objective results in such a case, then we might have very noisy labeled data.\\n\\n6. Section 3.1: \\\"We train at batch sizes of either 1024 or 2048 using ADAM optimizer.\\\" -- These batch sizes are quite a bit larger than conventional (e.g. 32, 64). Is there a reason for that?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
H-AAaJ9v_lE | Legendre Deep Neural Network (LDNN) and its application for approximation of nonlinear Volterra–Fredholm–Hammerstein integral equations | [
"Kourosh Parand",
"Zeinab Hajimohammadi",
"Ali Ghodsi"
] | Various phenomena in biology, physics, and engineering are modeled by differential equations. These differential equations including partial differential equations and ordinary differential equations can be converted and represented as integral equations. In particular, Volterra–Fredholm–Hammerstein integral equations are the main type of these integral equations and researchers are interested in investigating and solving these equations. In this paper, we propose Legendre Deep Neural Network (LDNN) for solving nonlinear Volterra–Fredholm–Hammerstein integral equations (V-F-H-IEs). LDNN utilizes Legendre orthogonal polynomials as activation functions of the Deep structure. We present how LDNN can be used to solve nonlinear V-F-H-IEs. We show using the Gaussian quadrature collocation method in combination with LDNN results in a novel numerical solution for nonlinear V-F-H-IEs. Several examples are given to verify the performance and accuracy of LDNN. | [
"Deep neural network",
"Volterra–Fredholm–Hammerstein integral equations",
"Legendre orthogonal polynomials",
"Gaussian quadrature method",
"Collocation method"
] | Reject | https://openreview.net/pdf?id=H-AAaJ9v_lE | https://openreview.net/forum?id=H-AAaJ9v_lE | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"9nltW7EvRLY",
"fzSCunv1a_A",
"hgQCu2YW43T",
"SvxY61gw6H"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513463,
1604708656390,
1603853204341,
1603462317057
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3512/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3512/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3512/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"All three reviews for this paper were negative, and the authors did not provide rebuttals or comments on the reviews. The main drawback of this work identified by the reviewers is that the empirical study is not sufficient (e.g., limited comparisons and ablation studies as well as low-dimensional examples).\"}",
"{\"title\": \"Review\", \"review\": \"** Summary **\\n\\nThe authors present a neural network based method to solve a special class of integral equations. Their approach involves training a neural network with Legendre polynomial based activation functions to approximate the solution $y(x)$ for a given $x$. The network is trained in a supervised fashion to minimize a loss function with two terms: (1) the $\\\\ell_2$ error between the true solution and $y(x)$ and (2) the residual of the given integral equation when analysed at $x$. They show impressive numerical results for several instances of VFH-IEs with very low errors. The primary contributions as claimed by the authors are the use of Legendre polynomial based activation functions and creating a differentiable approximation for the integral equation by using Legendre polynomials and Quadrature methods to analyse the integral. \\n\\n*** Pros ***\\n1. The numerical results show great efficiency and seem to perform at par or better compared to other numerical methods reported in literature. \\n2. The use of Legendre polynomials as an activation function to approximate the input domain is an interesting method to introduce well understood approximations from the numerical methods community.\\n\\n*** Cons ***\\n1. The paper lacks comparisons and ablation studies to show how their model compares to simple supervised training. For example, a simple baseline comparison would be to train a network with similar number of parameters and standard loss functions in a supervised fashion and without the IE residual. This would allow us to analyse the efficacy of the various components of the proposed architecture better.\\n\\n2. How does the proposed method improve upon traditional numerical methods? I also would like to know the timing comparisons between traditional methods and the proposed neural network method.\\n\\n3. 
For Figures 2 and 3, the error between $y_{true}$ and $y_{pred}$ should be plotted as well.\\n\\nThe paper in its current form is not addressing how and why neural networks improve performance over the traditional methods and is also missing relevant comparisons and ablation studies. I will be willing to change my score if the authors add the required experimental results.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"No improvements in both method and experiments\", \"review\": \"The paper proposed the Legendre Deep Neural Network (LDNN) to solve Volterra\\u2013Fredholm\\u2013Hammerstein integral equations. Specifically, the network uses Legendre polynomials as the activation in the first layer and uses Gaussian quadrature to discretize the integral operator as a summation. The numerical examples are performed to verify the performance of LDNN. However, the method is not novel and the numerical examples are too simple.\", \"major_comments\": [\"The proposed method is the same as the physics-informed neural network (PINN) for solving integral PDEs proposed in https://arxiv.org/abs/1907.04502, but this is not mentioned in the paper. In fact, the method proposed in https://arxiv.org/abs/1907.04502 is more general than the method in this paper and can solve more types of PDEs.\", \"The only difference is that here the first layer of the network uses Legendre polynomials as the activation, but there is no evidence in the paper that using Legendre polynomials makes a significant difference.\", \"The integral equations solved in this paper are very simple. The equations only have one or two integrals, the problem is one-dimensional, and the solutions are quite smooth. In https://arxiv.org/abs/1907.04502, an integro-differential equation is solved. The same method has also been used to solve the 1D/2D/3D time-fractional/space-fractional/time-space-fractional PDE in a complicated geometry for both forward and inverse problems (https://doi.org/10.1137/18M1229845), which is much harder than the problems solved in this paper.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
"{\"title\": \"Interesting direction, motivation for method must be stronger\", \"review\": [\"Overview: The paper proposes to solve the Volterra-Fredholm-Hammerstein integral equations using a form of neural networks: Legendre Deep Neural Networks which use Legendre polynomials in combination with a neural network to model the solution. The idea and direction of the paper is interesting, however both the theory and numerical results could be improved and specifically it should be made more clear *what* is the added benefit of using this LDNN model as opposed to any other method for solving the integral equations. I suggest some ideas for improvements below.\", \"Ideas for improvement, comments and questions:\", \"In general I like the idea of combining collocation, Legendre polynomials and neural networks.\", \"For readability (especially for people less familiar with these methods) it would be good to explain some concepts more clearly (see also next points).\", \"Are the last parameters \\\\zeta_1 and \\\\zeta_2 fixed or trainable? The way it looks in the equation now is since you refer to it as the \\u2018second network\\u2019 that they are trainable, but in Section 3 you mention they are fixed. What are the trainable parameters of the third layer?\", \"How are the roots $X_j$ and consequently $\\\\omega_j$ computed?\", \"The numerical results look good and I see the potential of the method.\", \"What is the intuition behind the FC layers after the Legendre layer? What would more hidden layers mean in terms of the approximation? Some numerical results on this would be useful.\", \"Related to the above, how does the LDNN method compare to the collocation method and are W^2 similar to the role of the coefficients of the basis functions?\", \"I think the case of *why* we would be using the LDNN method instead of other more classical approaches should be made a lot stronger for the paper to be of significant contribution. 
Right now it is not clear to me how the proposed method is better than classical methods. What is the relation to methods which, as you mention, \\u201can attempt is made to obtain the unknown coefficients of the basis functions so that the solution satisfies the equation in a set of candidate points\\u201d.\", \"A small comment but the English of the paper should be improved.\", \"Maybe introduce Legendre polynomials prior to defining the network?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
eBHq5irt-tk | Rethinking Parameter Counting: Effective Dimensionality Revisited | [
"Gregory Benton",
"Wesley Maddox",
"Andrew Gordon Wilson"
] | Neural networks appear to have mysterious generalization properties when using parameter counting as a proxy for complexity. Indeed, neural networks often have many more parameters than there are data points, yet still provide good generalization performance. Moreover, when we measure generalization as a function of parameters, we see double descent behaviour, where the test error decreases, increases, and then again decreases. We show that many of these properties become understandable when viewed through the lens of effective dimensionality, which measures the dimensionality of the parameter space determined by the data. We relate effective dimensionality to posterior contraction in Bayesian deep learning, model selection, width-depth tradeoffs, double descent, and functional diversity in loss surfaces, leading to a richer understanding of the interplay between parameters and functions in deep models. We also show that effective dimensionality compares favourably to alternative norm- and flatness- based generalization measures. | [
"effective dimension",
"hessian",
"generalization",
"double descent"
] | Reject | https://openreview.net/pdf?id=eBHq5irt-tk | https://openreview.net/forum?id=eBHq5irt-tk | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"-gLnbHAtIM",
"ZgxdwIFUigQ",
"A_cMPUFkJmf",
"gh7BO87g-YS",
"ku4s3TbGW4R",
"aqjKXwfKcDk",
"z4-i7vqH1n2",
"J5nJSKq8kD",
"iC9YsDs_Rz",
"gN8fXv_lMd",
"_08OIjv8V6f",
"W5BOweb0sva",
"w7Fzl3jMUU",
"CNy4lQWoIEa",
"46sZF6VHW84"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1613518036837,
1610040470242,
1605829934481,
1605829713915,
1605829667279,
1605829640091,
1605829479283,
1605829412528,
1605829281424,
1605829232278,
1605829159532,
1603997659181,
1603940033962,
1603870419817,
1603148073469
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3507/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3507/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3507/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3507/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3507/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"We respectfully disagree with this assessment\", \"comment\": \"We respectfully disagree with this assessment, as follows:\\n\\n(1) This paper represents the first time any generalization measure has been successfully used to track or provide insights into double descent, which is a *substantial* contribution. Double descent is one of the most widely visible and poorly understood phenomena in modern deep learning.\\n\\n(2) Similarly, this is one of the first and only works to actually consider width-depth trade-offs in neural network generalization. Most recent works focus exclusively on width. We show here how width-depth trade-offs for many sizes of convolutional networks interact for generalization, how effective dimension can be used to track these trade-offs, and how parameter counting misses a lot of relevant information in determining generalization. This is a substantial contribution. \\n\\n(3) We show that effective dimension is a compelling alternative to the *most successful modern generalization measures*, including PAC-Bayes flatness measures and spectral norms, selected from a thorough empirical study of many measures (Jiang et al. 2019).\\n\\n(4) We show how effective dimension and functional homogeneity in subspaces given by Hessian eigenvectors can be used to explain how it is surprisingly possible to dramatically compress neural networks through pruning or subspace inference. Compression is of great practical importance, and it has not been previously understood. Therefore these insights also form a substantial contribution.\\n\\n(5) This paper meticulously addresses parameter counting as a proxy for model complexity, which pervades the narrative in modern deep learning, and is behind many phenomena that are considered to be surprising, such as double descent. Even the popular expression \\\"overparametrization\\\" is an artifact of parameter counting. 
Many works on deep learning generalization open by expressing surprise at how models with more parameters than data points can provide good generalization. Addressing this narrative head-on is a substantial contribution.\\n\\n(6) All of the experiments in our paper are new contributions. The claim that an experiment is not new is not substantiated in the meta-review and is not true.\\n\\n(7) Effective dimensionality predates the work of MacKay (1992), and it is not tied specifically to a Laplace approximation. As we detail in the paper, effective dimensionality has a rich history, which includes the work of Cleveland (1979) and Gull (1989). Our paper is not at all about looking at one part of an approximation but not another part. Moreover, when models all have similar training loss, data fit terms are not relevant. Modern deep architectures -- the subject of this paper -- all have near-zero training loss. However, incidentally to our paper, ED does tend to be more reliable than Laplace approximations in comparing modern deep architectures, and we will highlight this point in future revisions.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The authors re-state Mackay's definition of effective dimensionality and describe its connections to posterior contraction in Bayesian neural networks, model selection, width-depth tradeoffs, double descent, and functional diversity in loss surfaces. The authors claim the effective dimensionality leads to a richer understanding of the interplay between parameters and functions in deep neural network models. In their experiments the authors show that effective dimensionality compares favourably to alternative norm- and flatness- based generalization measures.\", \"strengths\": \"1 - The authors include a description of how to compute a scalable approximation to the effective dimensionality using the Lanczos algorithm and Hessian vector products.\\n\\n2 - The authors include some novel experimental results showing the effective dimensionality with respect to changes in width and depth. These results are informative in how changes in depth and width affect this metric in a different way. The same for the experiments with the double descent curve.\", \"weaknesses\": \"1 - For some reason the authors seem to have taken the concept of effective dimensionality from David Mackay's approximation to the model evidence in neural networks and ignored all the extra terms in such approximation. It is currently unclear why there is a need to do this and focus only on the effective dimensionality. Almost all the experiments that the authors describe could have been done using a similar approximation to Mackay's model evidence. It is unclear why there is a need to focus just on a part of Mackay's approximation. 
The fact that the authors state that the effective dimensionality is only meaningful for models with low train loss seems indicative that David Mackay's approximation to the model evidence would be a better metric.\\n\\n2 - With the exception of the experiments for changes in the effective dimensionality as a function of the depth and width and the double descent curve, all the other experiments and results are expected and not new to anyone familiar with David Mackay's work.\\n\\n3 - The experiments on depth and width are for only one dataset and may not be representative in general. The authors should consider other additional datasets. \\n\\nThe authors should improve the paper, including a justification for using only the effective dimensionality and not David Mackay's approximation to the model evidence. They should also strengthen the experiments by comparing with David Mackay's approximation to the model evidence and should consider additional datasets as mentioned above.\"}",
"{\"title\": \"Response to Reviewer 1 (cont.)\", \"comment\": \"Thank you for the detailed comments, below are our responses to them, which we\\u2019ve clarified in the updated version.\", \"q\": \"Section 5.1 Effective dimensionality doesn\\u2019t track well if the training loss is high.\", \"a\": \"Indeed, as we explain above, effective dimension works best for model comparison when the models being compared both have similarly low training loss. Then the models can be viewed as essentially lossless compressions of the data -- and the one that provides the best compression, and hence has the lowest effective dimension, will tend to provide the best generalization.\\nWe\\u2019ve updated the paper to be a bit more careful about these claims to state \\u201ctracks remarkably well with generalization amongst models with low training loss\\u201d.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We appreciate your thoughtful and supportive review. We hope that our response and revisions have helped alleviate your concerns and have helped to connect the theoretical and empirical contributions. We want to emphasize that our contributions are not purely theoretical, or empirical, but both --- and that both contribute synergistically to the paper.\", \"we_also_want_to_emphasize_that_the_paper_is_making_several_very_deep_and_timely_contributions_that_are_all_interconnected\": \"(1) for the first time tracking double descent with a generalization metric, (2) exploring generalization as a function of depth (which has largely been ignored, with width instead as a focus, despite the practical significance of depth for generalization), (3) providing insights into effective dimension as model compression and links to Bayesian posterior contractions, (4) providing important contributions to the pervasive parameter counting narrative in contemporary deep learning; (5) explaining why subspace compression methods in deep learning are effective through the lens of effective dimensionality (these methods have been highly mysterious despite their practical success) by exploring properties of function-space; (6) showing that effective dimension actually provides a very competitive generalization measure relative to several generalization measures that have been isolated as high performing in recent literature. We hope you can consider the importance, timeliness, and synergy of these contributions, in considering your final assessment.\"}",
"{\"title\": \"Response to Reviewer 3 (cont.)\", \"comment\": [\"Effective dimensionality and generalization on page 2:\", \"When two models have the same training loss they can be viewed as providing a compression of the training data at the same fidelity, in which case the model which has the lower effective dimensionality, and thus provides the better compression --- capturing more regularities --- will tend to generalize better. Flatter solutions lead to lower effective dimensionality and also have been connected in many studies with better generalization [1, 2, 3, 4]. We also show in Section 4.3 that lower effective dimensionality provides a better Occam\\u2019s factor and shorter minimum description length.\", \"If we do not hold training loss constant when comparing models, then it is possible a model could achieve a lower effective dimension simply by extracting less information in the training data, which would typically not be predictive of generalization.\", \"We will make more clear the caveat that it is most interpretable to compare effective dimensionality for models with similar training loss. We also now signpost in the introduction our explanations for why lower effective dimensionality can lead to better generalization in section 4.2 and 4.3, with respect to minimum description length and Occam factors, and connections between flatness and generalization.\", \"Regarding the notation of $\\\\Phi^T \\\\beta$: You are correct --- we have made the necessary changes to keep notation consistent.\", \"Regarding page 6, equation 3: $\\\\mathcal{H}_{\\\\theta}$ is the Hessian of the log posterior, not the likelihood, so we should not have issues with a zero determinant. We have made sure to clarify this in the text.\", \"Hessian of the negative log likelihood: We only mention the effective dimensionality of the Hessian of the likelihood in the context of Equation 1. 
Typically we are looking at the Hessian of the loss which we note in Section 2.1 is the negative log posterior. We have clarified this point in the text to stave off further confusion.\", \"Appendix page 18: Good catch --- this was a typo we have now fixed in the paper.\", \"[1] Averaging weights leads to wider optima and better generalization, Izmailov et al. 2018\", \"[2] A simple baseline for bayesian uncertainty in deep learning, Maddox et al. 2019\", \"[3] On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, Keskar et al. 2017\", \"[4] Flat Minima, Hochreiter and Schmidhuber 1997\"]}",
"{\"title\": \"Response to Reviewer 3 (cont.)\", \"comment\": [\"In response to the comments and questions:\", \"Effective dimensionality decreasing with model size:\", \"We do actually have an explanation in section 5.1: \\u201cas the dimensionality of the parameter space continues to increase past the point where the corresponding models achieve zero training error, flat regions of the loss occupy a greatly increasing volume, and are thus more easily discoverable by optimization procedures such as SGD. These solutions have lower effective dimensionality, and thus provide better compressions of the data, as in Section 4.3, and therefore better generalization\\u201d. Moreover, larger models (models which are wider and deeper) tend to have larger capacity and be more expressive, and thus can find higher performing compressions of the data (e.g. they contain good subnetworks), which leads to decreased effective dimensionality. We will clarify these points earlier on.\", \"Stability to training runs + computing dominant eigenvalues: Effective dimensionality is very stable across different training runs. The ranking of models by effective dimensionality stays consistent with generalization in repeated trials. In practice for large neural networks we see an eigenspectrum of the Hessian that contains a small number of large eigenvalues followed by many eigenvalues that are approximately 0. To capture the general behavior of the eigenspectrum of the Hessian we need only compute this small number of eigenvalues. Therefore to save on computations while retaining an accurate estimate of effective dimensionality we compute the top 100 eigenvalues of the Hessian.\", \"Inspired by your question, we have also added Figure A.16 Appendix as an example of stability of training runs and the consistency of effective dimension when using more than 50 eigenvalues.\", \"For cases where we only examine a single mode in the loss surface (i.e. 
ML, MAP, Laplace approximations, [1], [2]), effective dimensionality is consistent with the construction of the model. Extending concepts like effective dimensionality to models that consider multiple modes in the loss surface is an interesting direction for future work, but quite different from the intention of our paper, which is in part to show how effective dimension can be an informative metric for models trained in a standard way. We have further comments about ED and multimodal posteriors in the opening of our response.\", \"Clarity of effective dimensionality of a matrix vs of the model: Thank you for the question. We have clarified this point in the paper, keeping consistent distinctions between effective dimensionality as a function of a matrix, and effective dimensionality of the Hessian of the loss.\"]}",
"{\"title\": \"Response to Reviewer 3 (cont.)\", \"comment\": [\"We appreciate your description of strengths. We would like to open with responses to the cons you have listed.\", \"We agree that the effective dimensionality [ED] looks at local behaviour within a mode. However, we would appreciate if you could consider three key points: (1) as far as we can tell, no generalization measure used in modern deep learning accounts for multimodal posteriors, so it may be unfair to hold this against ED; (2) ED does not \\u201cfail\\u201d for multimodal posteriors\\u2026 indeed, all of the neural network posteriors in our experiments would be multimodal, but ED still does a relatively good job of model comparison. This is the case for the same reason it is possible to train a neural network with SGD, despite multimodality, and reliably find reasonable generalization. While the posterior is multimodal, most of the modes easily discoverable by SGD provide a similar level of performance, albeit sometimes complementary solutions; (3) a unimodal measure is applicable for model comparison to an overwhelming majority of models in practice, which are typically trained with optimization (which converges to a single mode even if the loss surface is multimodal), or unimodal marginalization. A multimodal measure would mostly only be applicable if we are comparing between models that are a result of multi-basin marginalization. Our intention is to show that effective dimension can be informative for comparing models which have been trained in a standard way, which is typically optimization or unimodal marginalization.\", \"While we agree the theorems are relatively straightforward, we do not believe that should be held against the paper. There are many contributions in the paper, and in fact we do not reference the theorems as core contributions of the paper in the introduction. 
Moreover, the theorems do combine synergistically with the content: we show that many of the results that can be proven for linear models or generalized linear models hold for neural networks. We primarily use these theorems as stepping stones towards gaining insights into the behavior of large neural networks. We would also posit in this context that being \\u201cstraightforward\\u201d is arguably an advantage, and that relevance and impact are more important than complexity in results.\", \"We appreciate that you (and other reviewers) noted that the paper is generally well-written. While we agree several stylistic decisions can have both pros and cons, we do note that the decision to have figures 1 and 2 early-on in the paper was carefully considered, rather than arbitrary. The rationale was to have some of the key results appear early, so that a reader could become quickly engaged with the paper, and have a clear sense of what it\\u2019s about --- it sets up much of the material that follows. At the same time, it\\u2019s hard to have all the details in an introduction, and so we provided further detail later in the paper --- including additional comparisons related to these results.\"]}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your detailed, thoughtful, and supportive review! You have brought to light a number of helpful ways we can bring more clarity to the paper -- which we have incorporated in an updated version.\\n\\nWe want to emphasize that we believe our exploration of effective dimensionality makes a very timely and significant contribution, (1) for the first time tracking double descent with a generalization metric, (2) exploring generalization as a function of depth (which has largely been ignored, with width instead as a focus, despite the practical significance of depth for generalization), (3) providing insights into effective dimension as model compression and links to Bayesian posterior contractions, (4) providing important contributions to the pervasive parameter counting narrative in contemporary deep learning; (5) explaining why subspace compression methods in deep learning are effective through the lens of effective dimensionality (these methods have been highly mysterious despite their practical success) by exploring properties of function-space; (6) showing that effective dimension actually provides a very competitive generalization measure relative to several generalization measures that have been isolated as high performing in recent literature. We hope you can consider the importance, timeliness, and synergy of these contributions, in considering your final assessment. \\n\\nWe appreciate that you (and other reviewers) noted that the paper is generally well-written. While we agree several stylistic decisions can have both pros and cons, we do note that the decision to have figures 1 and 2 early-on in the paper was carefully considered, rather than arbitrary. The rationale was to have some of the key results appear early, so that a reader could become quickly engaged with the paper, and have a clear sense of what it\\u2019s about --- it sets up much of the material that follows. 
At the same time, it\\u2019s hard to have all the details in an introduction, and so we provided further detail later in the paper --- including additional comparisons related to these results. To improve clarity, we now signpost additional related material that comes later in the text.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for the thoughtful remarks. Thank you also for the specific questions, which we believe are we able to directly address below. We would value it if you would consider updating your assessment in light of our response.\", \"relationship_between_hessian_and_posterior_covariance\": \"We offer a view of the Hessian of the loss that is compatible with *both* Bayesian and maximum likelihood frameworks. In both cases we are considering a posterior which is proportional to a likelihood times a prior. In both cases the effective dimensionality describes the number of parameters determined by the data in terms of the number of sharp directions in the posterior. In the maximum likelihood framework the negative log posterior acts as the loss surface for optimization, and the prior acts as a regularizer. In the Bayesian case the parameter determination by the data can be understood as contraction of the posterior from the prior, which we show is consistent with our interpretation of effective dimensionality.\\n\\nMoreover, in the case of Bayesian linear regression with a Gaussian prior, the Hessian of the maximum likelihood estimator \\\\hat \\\\beta is exactly the inverse posterior covariance matrix over the parameters, a fact we explicitly use in the proof of Theorem 2 in the Appendix (see Appendix F.2). This algebraic relationship has the very nice property that it makes the Laplace approximation to the marginal likelihood (Eq. 3) exact. Furthermore, we demonstrate empirically in Figure 5 (right) that the effective dimensionality of the posterior covariance of small Bayesian NNs acts in an inverse fashion (it decreases as the number of data points increases) to the effective dimensionality of the Hessian matrix (which increases as the number of data points increases). 
The empirical result suggests that for Bayesian NNs the Hessian of the posterior at the ML estimator is very closely related to the inverse posterior covariance matrix. \\n\\nTheorem 4.1: We agree that Theorem 4.1 in the paper does not imply that effective dimensionality should decrease as the number of model parameters grows. But we do see the theorem as harmonizing with our empirical results and the general narrative of the paper. This theorem serves to highlight an immediate failure of the approach of parameter counting: in the overparameterized setting there will be many directions in which the parameters have not been determined by the data. We are using the theorem as a stepping stone to show that our intuitions from linear models (i.e. undetermined parameter directions) should hold for neural networks --- which we do see happens in practice, with our experiments. \\n\\nWe would like to emphatically clarify, however, that the argument for using effective dimension as a proxy for generalization is not only empirical. Theorem 4.1 is one of many results in the paper. The reasoning for why effective dimension should be a good proxy for generalization is also given in section 4.3, where we show that lower effective dimension leads to better Occam factors and connects with a lower minimum description length. There are also many results in the literature connecting flatness with generalization --- arguing that flatter solutions, which by definition will have lower effective dimension, often correspond to better generalization. We will make this point more clear in the text [1,2,3,4]. 
But we have also added an ablation study, inspired by your comments, in which we train a number of networks of varying widths on CIFAR10 with different learning rates and weight decays and compare effective dimensionality and test accuracy.\", \"figure_4\": \"This figure shows the effective dimensionality of the collection of networks with near zero training loss from Figure 2 for a range of regularization parameters z. In the revised version of the paper, we\\u2019ve updated the figure caption to better explain the figure. This is an important result, which we included to be especially thorough empirically, showing that the qualitative comparison between models given by the effective dimension is fairly robust to settings of \\u2018z\\u2019.\\n\\n[1] Averaging weights leads to wider optima and better generalization, Izmailov et al. 2018\\n[2] A simple baseline for Bayesian uncertainty in deep learning, Maddox et al. 2019\\n[3] On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, Keskar et al. 2017\\n[4] Flat Minima, Hochreiter and Schmidhuber 1997\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your review. There appears to be a key misunderstanding in your concern which we would like to clarify.\\n\\nWe emphasize that effective dimension can only be used for model comparison ***when we are comparing models of similar training loss***. Each model can then be viewed as providing a compression of the training data at the same fidelity, in which case the model that has the lower effective dimensionality (ED), and thus provides the better compression, will tend to generalize better. If we do not hold training loss constant, then it is possible a model could achieve a lower effective dimension simply by extracting less information in the training data, which would typically not be predictive of generalization. We can also see this perspective in our discussion of connections with the marginal likelihood in Section 4.3, where there is both a model fit and complexity penalty term. A model with low ED but poor model fit (e.g. training loss) will have a poor marginal likelihood. Two models with similar model fit but one with lower ED will have a better marginal likelihood, typically leading to better generalization.\\n\\nWe highlight this caveat about controlling for training loss for comparison in several places in the text. 
E.g., \\u201cwe see that once a model has achieved low training loss, the effective dimensionality, computed from training data alone, replicates double descent behaviour\\u201d (page 2); \\u201cfor models with low training loss (above the green partition), the effective dimensionality closely tracks generalization performance for each combination of width and depth\\u201d (page 2); \\u201cThe green curve separates models with near-zero training loss\\u201d (page 2); \\u201cAs the dimensionality of the parameter space continues to increase past the point where the corresponding models achieve zero training error, flat regions of the loss occupy a greatly increasing volume\\u201d (page 7); \\u201cIn the region of near-zero training loss, separated by the green curve, we see effective dimensionality closely matches generalization performance.\\u201d (page 7). \\n\\nWe have updated the text to more explicitly explain why effective dimensionality should generally be used to compare models with similar training loss, as in the second paragraph of our response here. If this condition is met, effective dimension certainly does provide a compelling alternative to parameter counting, and several state-of-the-art generalization metrics, which we can see in the results of Figures 1, 2, 5, and 7.\\n\\nWe believe this response fully addresses your concern, and would therefore appreciate it if you would consider substantially raising your score. Feel free to let us know if you have any further questions.\"}",
"{\"title\": \"Paper updates\", \"comment\": \"We thank the reviewers for helpful and encouraging comments! We have made the following updates to the paper, and also provide detailed responses to the individual reviewers in separate posts.\\n\\nClarifications\\n- We have clarified that the effective dimension should be used for model comparison when the models being compared both have similarly low training loss. Then the models can be viewed as low-loss compressions of the data --- and the model that provides the best compression, and hence has the lowest effective dimension, will tend to provide the best generalization. If there is high loss, or models with different training loss, then the level of compression is less relevant, because the models have not necessarily learned very much from the training data. This explains why effective dimension closely tracks generalization for both double descent and width depth tradeoffs in the region of low training loss (and much better than parameter counting, which suggests the opposite trend). 
\\n- We\\u2019ve updated the caption to Figure 4 to make it clear that each point is a model of varying width and depth from Figure 2, while the color on each point is a different parameter, z, for computing the effective dimensionality.\\n- We\\u2019ve made the distinction between effective dimensionality as a function of matrices and the model effective dimensionality clearer, and signposted that when we are talking about generalization performance we are referring to effective dimensionality of the Hessian of a trained model.\\n- We\\u2019ve fixed usage of the shape of the features matrix \\\\Phi throughout.\\n- We\\u2019ve emphasized that the Hessian is dependent on the dataset the model is trained on in Section 2.\\n- We\\u2019ve cleaned up the typos in the proof of Theorem F.1 in the Appendix.\", \"new_experiments\": [\"We\\u2019ve added Appendix Figure A.9, which shows function space homogeneity in directions of the Hessian which have small eigenvalues on the test set of CIFAR10 for a CNN model.\", \"We\\u2019ve also added Figure A.16, which demonstrates on an MLP that the effective dimensionality of the Hessian is quite robust to the number of eigenvalues used to compute it.\"], \"additional_updates\": [\"We\\u2019ve appended the Appendix to the main pdf file.\", \"We\\u2019ve included discussion of PyHessian and of the existence of negative eigenvalues in Section 2.\"]}",
"{\"title\": \"Official Blind Review#4\", \"review\": \"summary:\\nThis paper provide a unified view of the generalization ability in the Bayesian deep learning framework through the effective dimensionality. The authors claim that some phenomenon in the deep learning such as generalization in #parameter >> #data settings, and double descent can be explained by the effective dimensionality. \\n\\nAlthough the theorem and experiments in the paper suggest that some of these properties can be explained by effective dimensionality, it is insufficient to convince that it substitutes other measures such as parameter counting. \\nFor example, in Figure 2 abd 7, the effective dimensionality shows a very different behavior from test loss and test error when the width is small.\\u3000In other words, effective dimensionality does not seem to account for the first descent in Figure 2 and Figure 7 (although it follows the second descent well). \\n\\ntypos\\n\\n- p.17 in MEASURING POSTERIOR CONTRACTION IN BAYESIAN GENERALIZED LINEAR MODELS\\n\\n\\t-- The numerator in the second line of equation (11): 1 - \\\\alpha^2(\\\\lambda_i + \\\\alpha^-2) -> \\\\alpha^2(\\\\lambda_i + \\\\alpha^-2) - 1?\\n\\t\\t\\n- p.18 in F.1 PROOF AND EXTENSIONS TO THEOREM 4.1\\n \\n -- \\\"...the posterior distribution of \\\\beta has an p-k directional subspace...\\\" -> \\\"...the posterior distribution of \\\\beta has an k-n directional subspace...\\\"?\\n \\n -- \\\"Therefore, the posterior covariance has p-n directions...\\\" -> \\\"Therefore, the posterior covariance has k-n directions...\\\"?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting avenue, but requires improvements\", \"review\": \"This paper explores the effective dimensionality of the Hessian as a possible explanation for why neural networks generalize well despite being massively overparametrized.\\n\\nWhile I concur with the intuition, I think in the current state of the paper, some points could be improved and clarified.\\n\\n### Relationship between Hessian and posterior covariance\\n\\nWhile you mainly reason in the Bayesian framework about the posterior, it seems that networks in fig 1, 6, 7 are trained using ML. So why would the Hessian of the ML estimator relate to the covariance of the posterior?\\n\\n### Theorem 4.1\\n\\nTheorem 4.1 shows that even with $k \\\\gg n$ parameters, there are only $n$ directions in which the posterior covariance changes from the prior. But the rest of the discussion shows that the effective dimension actually decreases, which is not captured by your theorem. In this regard, I consider this theorem as an illustration of why the effective dimension does not increase with increasing number of parameters, a statement that is weaker than saying that the effective dimension actually decreases.\\n\\nTherefore, we can say that your argument for advocating in favor of using the effective dimension as a proxy for generalization is mainly empirical. Then I would have appreciated a more thorough ablation study, that would demonstrate that the correlation is still occuring while varying other hyperparameters.\\n\\n### Figures 4\", \"fig_4\": \"can you precisely state what is plotted, i.e. for a fixed $z$ why do we have several datapoints?\\n\\n### Conclusion\\n\\nAs already said in the beginning I really like the idea of effective dimension playing an important role in generalization. 
I however think that the relationship between the Hessian of the ML estimator and the covariance of the posterior, as well as the empirical study, should be improved before this is published.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting insights into generalization in deep learning based on the effective dimensionality\", \"review\": [\"# Summary\", \"The paper applies the effective dimensionality (introduced by MacKay, Gull and others) to study the generalization properties of large probabilistic models. Effective dimensionality is the number of parameters determined by the data (derived from the curvature of the posterior at the MAP estimate), and shown to be more informative than simple parameter counting. After demonstrating the usefulness of the effective dimensionality, the authors study double descent observed when training deep nets of increasing width/depth. The authors argue that double descent is an artifact that can be understood by studying the effective dimensionality of the model. They take a detailed look at width-depth trade-offs using numerical experiments. Moreover, they compare the effective dimensionality with other generalization measures and find a superior performance.\", \"# Assessment\", \"Overall, I enjoyed reading the paper. Most of the time it is well-written and provides some new insights into questions concerning generalization in probabilistic models. So I'm tending towards acceptance, but there are several problems with the current version of the paper.\", \"## Pros\", \"Effective dimensionality is shown to be a useful quantity to understand generalization properties of probabilistic models in particular deep neural nets.\", \"Effective dimensionality helps us understand width-depth trade-offs in deep learning.\", \"Effective dimensionality is a metric for generalization solely based on the training data.\", \"## Cons\", \"Effective dimensionality rests on the Laplace approximation (of the log posterior) which fails for multimodal posteriors.\", \"Organization of material is suboptimal. For example, section 5 refers to figures 1 and 2, which already has been discussed in the Introduction. 
Occasional sloppiness in notation and wording.\", \"Theorems are rather elementary; I'm not sure whether they should be highlighted as theorems. Validity for neural nets is only hypothesized and demonstrated empirically. Any analytical results?\", \"# Comments / Questions\", \"The organization of the paper is a bit difficult to follow. Central results (Figs. 1 and 2) are already shown early in the paper and later explained in more detail...\", \"There is no explanation as to why the effective dimensionality decreases with increasing width/depth (as shown in Figs. 1 and 2). Why do we see a decrease in effective dimensionality?\", \"How stable is the calculation of effective dimensionality across different training runs?\", \"If you are only computing the 100 largest eigenvalues, the effective dimensionality will always be smaller than or equal to 100. Why do you restrict the effective dimensionality to a maximum of 100 in most of your experiments?\", \"How much sense does effective dimensionality make for multimodal posteriors?\", \"The use of the notion \\\"effective dimensionality\\\" is sometimes confusing: On the one hand, it is a property of any positive semi-definite matrix (Eq. 2). On the other hand, most of the time \\\"effective dimensionality\\\" is implicitly understood as \\\"the effective dimensionality of the model\\\" (i.e. $N_{eff}$ of the Hessian of minus log posterior). I would prefer to use $N_{eff}$ when you talk about the effective dimensionality of a specific matrix and \\\"effective dimensionality of the model\\\" when you mean $N_{eff}$ of the Hessian of minus log posterior. 
An instance where your ambiguous use of \\\"effective dimensionality\\\" leads to confusion can be found on page 5: \\\"For Bayesian linear models, the effective dimensionality of the parameter covariance is the inverse of the Hessian\\\" -- it's not clear to me what you mean by that...\", \"Page 2: \\\"we expect models with lower effective dimensionality to generalize better\\\" -- why?\", \"Page 3: Right before eq. (1): You specify \\\"$y \\\\sim \\\\mathcal N(f=\\\\Phi^T\\\\beta, \\\\sigma^2)$\\\". What do you mean by \\\"$f=...$\\\"? Shouldn't \\\"$f=$\\\" be removed? Moreover, in the theorems and their proofs you work with the transpose of $\\\\Phi$ since the model is $y\\\\sim \\\\mathcal N(\\\\Phi\\\\beta, \\\\sigma^2 I_n)$... Also the prior \\\"$\\\\beta \\\\sim \\\\mathcal N(0, \\\\alpha^2 I_N)$\\\" is not consistent with the theorems and the appendix where $k$ (sometimes $p$) is used to indicate the number of parameters.\", \"Page 3: Right after eq. (1): \\\"where $\\\\lambda_i$ are the eigenvalues of $\\\\Phi\\\\Phi^T$, the Hessian of the log likelihood\\\". Strictly speaking $\\\\Phi\\\\Phi^T/\\\\sigma^2$ is the Hessian of minus log likelihood. Once again, please use one consistent model and notation in the main text and the appendix (either $y\\\\sim \\\\mathcal N(\\\\Phi^T\\\\beta, \\\\sigma^2 I_n)$ or $y\\\\sim \\\\mathcal N(\\\\Phi\\\\beta, \\\\sigma^2 I_n)$ -- I prefer the latter in which case the Hessian of minus log likelihood is $\\\\Phi^T\\\\Phi/\\\\sigma^2$ rather than $\\\\Phi\\\\Phi^T/\\\\sigma^2$...) and carefully adapt all expressions that are affected by your choice.\", \"Page 6, Equation 3: Your Occam factor scales with $1/\\\\sqrt{\\\\mathrm{det}(\\\\mathcal H_\\\\theta)}$. What happens if the Hessian is singular (which is bound to happen for overparameterized models)? 
Comparison with MacKay's definition reveals that you should replace $\\\\mathcal H_\\\\theta$ with $\\\\mathcal H_\\\\theta + I_k / \\\\alpha^2$, which resolves the trouble with singular Hessians of minus log likelihood.\", \"Also in other instances, you tend to focus on the Hessian of minus log likelihood, whereas MacKay looks at $A = - \\\\nabla\\\\nabla \\\\log p(\\\\theta|\\\\mathcal D, \\\\mathcal M) = \\\\mathcal H_\\\\theta + I_k/\\\\alpha^2$ (Hessian of minus log posterior / posterior curvature / inverse posterior covariance). I find this confusing. I think it would help to always clearly state, if you talk about the Hessian of minus log likelihood or the Hessian of minus log posterior.\", \"Appendix: \\\"In the overparameterized regime, $k > n$, with linearly independent features we have that has rank at most $k$\\\" (page 18). This is incorrect, the rank is at most $n$. In both proofs: Why not argue using the SVD of the feature matrix? If $\\\\Phi = U\\\\Lambda V^T$ with column-orthogonal matrices $U, V$ such that $U^TU=V^TV = I_r$ (where rank $r \\\\le \\\\min(n,k)$), we have $\\\\Phi\\\\beta = U\\\\Lambda V^T\\\\beta$. Use projectors $P=VV^T$ and $P_\\\\perp=I_k - VV^T$ to decompose $\\\\beta$ into contributions that affect the prediction, $P\\\\beta$, and perturbations that do not change the prediction, $P_\\\\perp\\\\beta$ (also: $\\\\|\\\\beta\\\\|^2 = \\\\|P\\\\beta\\\\|^2 + \\\\|P_\\\\perp\\\\beta\\\\|^2$). $P$ projects into an $r$-dimensional linear subspace, $P_\\\\perp$ projects into the orthogonal space (dimension: $\\\\max(n,k) - r$) of neutral perturbations: $\\\\Phi\\\\beta=\\\\Phi(P + P_\\\\perp)\\\\beta = \\\\Phi P\\\\beta$ since $\\\\Phi P_\\\\perp = 0$.\", \"# Minor\", \"Page 2: symbol $\\\\mathcal L$ (log likelihood) is not or only implicitly defined in the text\", \"Page 5, Fig. 5, left panel: The tick labels on the right axis ($N_{eff}$) are very small (ranging from 0 to 4). 
Is this correct?\", \"Page 6, Equation 3: It would be helpful to explain all symbols (i.e. $p(\\\\theta_{MP}|\\\\mathcal M)$ is the prior evaluated at the MAP estimate...)\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A borderline paper with interesting insights?\", \"review\": [\"**Summary**: In this article, the authors revisited the idea of *effective dimensionality* as a complexity measure for large-scale machine learning systems, and in particular, modern deep neural networks. Theoretical arguments were provided for linear and generalized linear models (Theorem 4.1 and 4.2). Connections were made between the proposed effective dimensionality and the double descent phenomenon, width-depth trade-off, function-space homogeneity, and other generalization measures in the literature. Experiments on linear models as well as deep networks (ResNet18) were provided to support the effectiveness of the proposed metric.\", \"**Strong points**: The authors revisited the idea of *effective dimensionality* as a complexity measure for large-scale machine learning systems, and in particular, modern deep neural networks. Theoretical arguments were provided for linear and generalized linear models. Insightful discussions were made on the connection between the proposed effective dimensionality and the double descent phenomenon, width-depth trade-off, function-space homogeneity, and other norm- or flatness-based generalization measures. The paper is in general well-written.\", \"**Weak points**: The presentation of the article can be significantly improved. The contribution, from either a theoretical (Theorem 4.1 and 4.2 on Bayesian linear models with Gaussian prior, with generalized linear models in the appendix) or an empirical (ResNet18 on CIFAR-100) perspective, seems not enough for a clear accept.\", \"**Recommendation**: On account of the theoretical or empirical contributions of this work, I find this paper somewhat borderline. 
Nonetheless, according to the strong points I mentioned above and in particular, the interesting and novel insights offered by this paper into the understanding of deep neural nets, I'm more leaning toward an acceptance.\", \"**Detailed comments**:\", \"P3 Section 2 \\\"matrix of second derivatives of the loss, $H_{\\\\theta} = - \\\\nabla \\\\nabla_{\\\\theta} \\\\mathcal L(\\\\theta, \\\\mathcal D)$\\\": what does $\\\\mathcal D$ mean here?\", \"P3 Section 2.1 \\\"This increase in curvature of the loss that accompanies certainty about the parameters leads to an increase in the eigenvalues of the Hessian of the Growth in eigenvalues of the Hessian of the loss corresponds to increased certainty about parameters\\\": is this a **general** claim, how is this theoretically/empirically supported? And it is not clear, at least to me, how the same intuition built here extends to general cases as claimed by the authors below Figure 4 in P4.\", \"are the (Hessian) eigenvalues assumed to be all **positive** in the definition of effective dimensionality? 
This may not be the case for neural networks.\", \"\\\"Therefore, effective dimensionality explains the number of parameters that have been determined by the data\\\": at this point (of the article), it is not yet clear to me how the effective dimensionality defined above is connected to the data.\", \"P4 Practical Computations: there exists a library called \\\"PyHessian: Neural Networks Through the Lens of the Hessian\\\" that can perform many eigenspectrum-based computations of the Hessian of deep neural nets, which might help to conduct experiments beyond the first $100$ eigenpairs of the Hessian, though honestly, I have not tried it myself.\", \"Theorem 4.2 states the \\\"function-space homogeneity\\\" of a subspace of the Hessian, in the sense of the training prediction, how does this affect the test performance of the model?\", \"Section 5.1: \\\"tracks remarkably well with generalization \\u2014 displaying the double descent curve that is seen in the test loss\\\": this is not entirely true, the first (local) minimum of the effective dimensionality and of the test loss appear at a relatively different width. It seems to me that the proposed metric is more \\\"accurate\\\" for (nearly) interpolation models (i.e., models with zero or low training loss): this is also seen at the bottom of the left plot of Figure 2 where the effective dimensionality (with high training loss) is low, while it is not the case for test loss.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
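The effective dimensionality debated throughout the reviews above, N_eff(H, z) = sum_i lambda_i / (lambda_i + z) (Eq. 2 of the paper under review), is simple to compute once Hessian eigenvalues are available. The sketch below is illustrative only (not the authors' code); clipping negative eigenvalues to zero is an assumed convention here, since neural-network Hessians can be indefinite, as the reviewers note.

```python
import numpy as np

def effective_dimensionality(eigenvalues, z):
    """N_eff(H, z) = sum_i lambda_i / (lambda_i + z).

    Sharp directions (lambda_i >> z) each contribute ~1, flat
    directions (lambda_i << z) contribute ~0. Negative eigenvalues
    are clipped to zero -- an assumed convention, not necessarily
    the one used in the paper.
    """
    lam = np.clip(np.asarray(eigenvalues, dtype=float), 0.0, None)
    return float(np.sum(lam / (lam + z)))

# Two sharp directions and two (near-)flat ones:
eigs = [100.0, 10.0, 0.01, 0.0]
print(round(effective_dimensionality(eigs, z=1.0), 3))  # 1.909
```

This also makes one reviewer's point concrete: if only the top k eigenvalues are computed (k = 100 in the paper's experiments), N_eff is capped at k, since each summand is at most 1.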
x6x7FWFNZpg | Decentralized SGD with Asynchronous, Local and Quantized Updates | [
"Giorgi Nadiradze",
"Amirmojtaba Sabour",
"Peter Davies",
"Ilia Markov",
"Shigang Li",
"Dan Alistarh"
] | The ability to scale distributed optimization to large node counts has been one of the main enablers of recent progress in machine learning. To this end, several techniques have been explored, such as asynchronous, quantized and decentralized communication--which significantly reduce the impact of communication and synchronization, as well as the ability for nodes to perform several local model updates before communicating--which reduces the frequency of communication.
In this paper, we show that these techniques, which have so far largely been considered independently, can be jointly leveraged to minimize distribution cost for training neural network models via stochastic gradient descent (SGD).
We consider a setting with minimal coordination: we have a large number of nodes on a communication graph, each with a local subset of data, performing independent SGD updates onto their local models. After some number of local updates, each node chooses an interaction partner uniformly at random from its neighbors, and averages a (possibly quantized) version of its local model with the neighbor's model.
Our first contribution is in proving that, even under such a relaxed setting, SGD can still be guaranteed to converge under standard assumptions. The proof is based on a new connection with parallel load-balancing processes, and improves existing techniques by handling decentralization, asynchrony, quantization, and local updates within a single framework, and bounding their impact.
On the practical side, we implement variants of our algorithm and deploy them onto distributed environments, and show that they can successfully converge and scale for large-scale neural network training tasks, matching or even slightly improving the accuracy of previous methods. | [
"distributed machine learning",
"SGD",
"decentralized algorithms",
"quantization"
] | Reject | https://openreview.net/pdf?id=x6x7FWFNZpg | https://openreview.net/forum?id=x6x7FWFNZpg | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"aqoNwZSQh75",
"8k2lKoNEbt",
"uOpOLAlWYvF",
"WYx49Df16t",
"5f2leNCKmq",
"81XRcZA1DlR",
"r-IEyf5Ungj",
"9UWnXgSHY-5",
"weinGFmrdW0",
"mp_d1DhkyYe",
"kv0J-YZvBxR"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040369538,
1606195464200,
1605385624463,
1605385426188,
1605385053174,
1605384904490,
1605384619214,
1603902927101,
1603867507365,
1603865887269,
1603593042826
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3506/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3506/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3506/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3506/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3506/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3506/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3506/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3506/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3506/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3506/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The reviews were a bit mixed: on one hand, by combining and adapting existing techniques the authors obtained some interesting new results that seem to complement existing ones; on the other hand, there is some concern on the novelty and on the interpretation of the obtained results. Upon independent reading, the AC agrees with the reviewers that this paper's presentation can use some polishing. (The revision that the authors prepared has addressed some concerns and improved a lot compared to the original submission.) Overall, the analysis is interesting but the significance and novelty of this work require further elaboration. In the end, the PCs and AC agreed that this work is not ready for publication at ICLR yet. Please do not take this decision as an under-appreciation of your work. Rather, please use this opportunity to consider further polishing your draft according to the reviews. It is our belief that with proper revision this work can certainly be a useful addition to the field.\\n\\nSome of the critical reviews are recalled below to assist the authors' revision:\\n\\n(a) The result in Theorem 4.1 needs to be contrasted with a single machine setting: do we improve the convergence rate in terms of T here? do we improve the constants in terms of L and M here? What is the advantage one can read off from Theorem 4.1, compared to a single machine implementation? How should we interpret the dependence of (optimal) H on r and lambda_2? \\n\\n(b) The justification for $T \\\\geq n^4$ is a bit weak and requires more thoughts: one applies distributed SGD because n is large. What happens if T does not satisfy this condition in practice, as in the experiments?\\n\\n(c) Extension 1 perhaps should be more detailed as its setting is much more realistic than Theorem 1. One could use Theorem 1 to motivate and explain some high level ideas but the focus should be on Extension 1-3. 
In extension 2, the final bound seems to be exactly the same as in Theorem 1, except a new condition on T. Any explanations? Why asynchronous updates only require a larger number of interactions but retain the same bound? These explanations would make the obtained theoretical results more accessible and easier to interpret.\"}",
"{\"title\": \"Revision Submitted\", \"comment\": \"We thank the reviewers again for their feedback.\\n\\nWe submitted a significant revision, which addresses all the reviewer comments as per our individual replies. In the PDF version, the significant changes are marked in blue. We will continue to make minor revisions until the deadline.\", \"the_major_changes_are_the_following\": [\"We unified the variance-based analysis extension (now Extension 1) to also allow for non-i.i.d. data, as part of the same argument. This is a significant technical improvement in the revision.\", \"We discuss the algorithm's communication complexity, with and without quantization being applied.\", \"We explained the role of the $T \\\\geq n^4$ requirement and clarified that it significantly improves upon prior work on decentralized algorithms.\", \"We motivated the choice of graph topology via citations to the literature on supercomputing and cloud networks.\", \"We clarified the role of the quantization scheme of Davies et al. in our quantized algorithm, and why standard (unbiased) quantizers wouldn't work.\", \"We added running times vs. accuracy for ResNet18/ImageNet, and a breakdown of average time/batch vs node count for the various schemes.\", \"We re-wrote part of the introduction for concision and clarity.\", \"We made sure all notation is clear before it is used.\"]}",
"{\"title\": \"Individual response\", \"comment\": \"Thank you for your feedback. We address the main issues below, in order:\\n\\t\\n> 0. \\u201cThe algorithm simply combines many different existing techniques and does not lead to any substantial new development.\\u201d \\n\\nWe agree that our algorithm is a combination of previous techniques. \\nHowever, we would like to mention the following points: \\n\\n\\u25cf\\tFirst, our main theoretical contribution is on the analysis side: our technique is the first to be able analyze all four consistency relaxations (decentralization, asynchrony, quantization, and local steps) in conjunction, using a single type of argument. The fact that this a significant challenge is recognized by the community: please see, for instance, [\\u201cAdvances and Open Problems in Federated Learning\\u201d arXiv:1912.04977], Section 2.1.2, which lists these possible consistency relaxations and poses their joint analysis as a challenge. \\n\\n\\u25cf\\tSecond, we would argue that combining these existing techniques is not always trivial. One particularly tricky example is adding quantization, which is known to be challenging even in the basic decentralized setting (without asynchrony). \\n\\n\\u25cf\\tThird, results suggest that our method outperforms previous decentralized proposals in terms of accuracy-versus-time on practically-relevant models. \\n\\t\\nThank you for the detailed comments on the presentation, which we have addressed as follows. (We follow your numbering.)\\n\\t\\n*1.\\tWe have compressed the introductory paragraphs and put them in context, as well as substantially revised the introduction for clarity.*\\n\\n*2.\\tWe have defined n and T upfront formally.*\\n\\n3.\\tYou are right that centralized or synchronous settings do not require $T \\\\ge n^4$. 
\\nHowever, as discussed in the answer to AnonReviewer 3, first point, a non-trivial bound on T is required for mixing in the *decentralized* setting and our requirement is the least strict among existing methods (e.g. AD-PSGD requires $T\\\\ge n^6$, and SGP requires $T \\\\ge n d^2$). Please see Appendix B for a detailed discussion. \\n\\n*We have added a discussion on this in the body, and invite the reviewer to examine Appendix B for a detailed discussion of the assumptions and of the relation to prior work.* \\n\\n*4.\\tFixed.*\\n\\n5.\\tTheoretically, our algorithm should dominate previous decentralized proposals in terms of total communication steps to convergence and total communication cost. Practically, it dominates them in terms of time-to-accuracy (see Figure 1(a)). \\n\\n*We have added a graph of loss-vs-time for the Transformer example. Since the trends were identical to the BLEU graph, this is given in the Appendix.* \\n\\n6.\\tGood point. The measure is the total number of communication steps, but this is equivalent up to constants to the total number of gradient evaluations and to the total number of iterations, as we usually assume H to be a constant. \\n\\n*We have added a discussion of the convergence trade-off induced by $H$.*\\n\\n*7.\\tAlso a good point. \\nIn terms of the theoretical analysis, if we disregard quantization and local steps ($H = 1$), our bounds are equivalent to those of [Lian et al., 2017].*\\n\\n*In practice, our algorithm is superior to previous proposals in terms of total communication cost, everything else being equal, especially at high node counts. For an illustration, please see Figure 2(b). Our algorithm has similar or better accuracy for the same number of gradient evaluations relative to previous proposals, and its cost per communication step is significantly lower than SGP or even AD-PSGD. 
The same point is made in Figure 1(b)--please see the results at 64 nodes.* \\n\\n*8.\\tand 9.: We apologize for these issues, which we have fixed in the revision.*\"}",
"{\"title\": \"Individual response\", \"comment\": \"[*We mark updates to the response in italics.*]\\n\\nThank you for your feedback. We address the main issues below, in order: \\n\\n>1.\\t\\u201cAn $r$-regular graph is required.\\u201d\\n\\nThis assumption faithfully models supercomputing and cloud networks, which tend to be densely connected and low-diameter. \\nFor example, Dragonfly topologies are very popular in supercomputing networks, and are regular. \\n\\n*We have added references motivating this modeling choice in the revision. More generally, we believe our technique can be used to analyze the process on general graphs, which we leave for future work.* \\n\\n>2.\\tThe benefit of using the quantization scheme of [Davies et al.]\\n\\n**Short answer:**\\n\\nWe use [Davies et al.] because it is the only quantization scheme where the error depends on the *distance* between its inputs, and not on the *norm* of its inputs. This is a critical issue when quantizing models, since we do not have a bound on their norms (as opposed to quantizing gradients, for which we could use the second-moment bound), but we do have a bound on their difference (via Gamma). \\nEven so, we have to be very careful in the parametrization of this quantization scheme to achieve convergence. \\n\\n*We have added a specific discussion on this point in the revision.* \\n\\n**Longer answer:**\\n\\n[Davies et al] allows us to bound the error caused by quantization by $\\\\|X_t^i-X_t^j\\\\|^2$ (if we quantize $X_t^i$ and send it to $j$), which in turn can be bounded naturally via our bound on $\\\\Gamma(t)$. (Please see Appendix G, analysis outline paragraph for a more detailed overview.) \\nStandard quantization schemes, such as QSGD, bound the error by $\\\\|X_t^i\\\\|^2$. 
Using such a scheme, it will be difficult to show the convergence, since $X_t^i$ is the model and we do not have any guarantees on its absolute norm: we just know that models are not far from each other, but they could be arbitrarily large, which would lead to arbitrary error. \\n\\n> \\u201cThe effect of quantization on convergence and communication cost.\\u201d\\n\\n\\n*We have added a discussion about the communication cost and the effect of quantization. In short, the convergence bounds stay exactly the same as in Theorem 4.1. (See Theorem G.1 in the Appendix).*\\n\\n> 3.\\t\\u201cAlgorithm 2 is not clear.\\u201c\\n\\nWe apologise for the lack of clarity, avg is indeed not needed in this case. $i$ and $j$ are the nodes which interact at step $t$. $X^i-S^i$ is the local steps node $i$ performed (initially the model had value $X^i$ and after local steps value is $S^i$). $X^i$ gets averaged with an estimate of model $X^j$ (denoted by ${X^j}\\u2019$) and only after that we apply local steps.\\n\\n> 4.\\t\\u201cEach time an edge is activated and the two nodes connected through the edge are updated. Therefore, there is still synchronization in Alg. 1.\\u201d \\n\\nExactly! This is precisely the limitation we remove in Extension 2 (Non-blocking averaging), which allows a method to just \\u201cpush\\u201d its model update to its communication partner in a non-blocking way, and to move on to the next iteration. \\n\\n> \\u201cIs it possible to update one node based on the results from multiple connected nodes?\\u201d \\n\\nThis is possible, but the node would have to complete its local steps corresponding to its first interaction before the second interaction partner may communicate with it.\"}",
"{\"title\": \"Individual response\", \"comment\": \"A.\\tWeak points (continued).\\n\\n\\n5.\\t\\u201c$H^2$ term in the convergence bound.\\u201d \\n\\nThis is a good point. The $H^2$ term usually comes from using Cauchy-Schwarz in order to bound second moment of sum of $H$ gradients (so that we upper bound the potential Gamma, which measures the disbalance between local models), which would not be needed if $H$ goes to infinity and nodes never communicate. However, in this case it is not clear that it is possible to provide any guarantees on the convergence of the mean $\\\\mu_t$ of the models in the non-convex case. \\n\\n6.\\t\\u201cTheorem 4.2 requires, $T=O(*)$.\\u201d \\n\\nThank you for pointing this out, $O$ should be replaced with $\\\\ge$ and the Theorem will work.\\n\\n7.\\t\\u201cDefinition of $T$ and replacing $T$ with $Tn$.\\u201d \\n\\nWe will be more clear about the definition. $T$ is the total number of interactions between two nodes. It can be replaced by $\\\\Omega(T_{parallel}n)$: If we look at the interactions ordered linearly by the time when they occur, we can split them in $T_{parallel}$, consecutive chunks, where each chunk contains $\\\\Omega(n)$ operations and all operations within a chunk happen in parallel. (This transformation to parallel time is standard in gossip and population models.)\\n\\n*We have added a clarification discussion on this point.*\\n\\nB. 
Further Questions.\\n\\n1.\\t\\u201cMerging section I with theorem 4.1.\\u201d \\nWe merged section I with theorem 4.2 since , Theorem 4.1 uses second moment bound and $\\\\rho$ does not appear.\\nIn the case of theorem 4.2 , the reviewer is correct: there is a term with $\\\\rho^2$ (we replace $\\\\sigma^2$ with $\\\\sigma^2+4\\\\rho^2$).\\n\\n*Thank you for this nice suggestion, a version of which we implemented in the revision.*\\n\\n2.\\t\\u201cLemma F.3 is confusing, $\\\\Gamma(t)$ should decrease with $t$.\\u201d\\n\\nThe purpose of lemma F.3 is to show that local models of nodes do not diverge, as $t$ increases. For this, it is not required that $\\\\Gamma(t)$ decreases with $t$. We could indeed use diminishing step sizes, but it would not necessarily improve the convergence bound and it also would cause additional overhead of coordinating step sizes between the nodes (We would have to make sure that step sizes of nodes do not differ by too much).\\n\\n\\n3.\\t\\u201cCan you also show the run time plot for ResNet?\\u201d\\n\\n*Yes, we have added this to the revision.*\\n \\nC. Optional improvements.\\n\\n[*We have clarified all these points.*]\\n\\nWe thank the reviewer for the provided suggestions. We will address them as follows:\\n\\n1.\\t\\u201c$\\\\frac{r}{\\\\lambda_2} \\\\ge 1$.\\u201d\\n\\nWe would like to point out that in the case of the fully connected graph $r=n-1$.\\nand $\\\\lambda_2=n$, we will add this example to the discussion.\\n\\n2.\\t\\u201c$\\\\tilde h_i^s$ depends on $\\\\tilde g_i$.\\u201d\\n\\nWe apologise for not being more precise, we skipped superscript in the case of \\n$\\\\tilde g_i$, This would make $\\\\tilde h_i^s$ to depend on $\\\\tilde g_i^0, \\\\tilde g_i^1, \\u2026, \\\\tilde g_i^{s-1}$ . 
\\n\\n3.\\t\\u201cExtra \\u2018-\\u2019.\\u201d\\n\\nThank you for pointing this out, we will correct it.\\n\\n4.\\t\\u201cMissing $(n-2)/n$ factor.\\u201d\\n\\n$(n-2)/n$ factor in front of the dot product disappears because of the observation before equation (18) ($(n-2)/n$ becomes 1, since $-2/n$ contributes towards the term which is equal to 0).\"}",
"{\"title\": \"Individual response\", \"comment\": \"[*The response was updated to reflect the contents of the revision. The modifications are described in italics.*]\\n\\nThank you for your feedback. We address the main issues below, in order:\\n\\nA.\\tWeak points. \\n\\n1.\\t\\u201cThe $T \\\\geq n^4$ bound on the total number of steps.\\u201d \\n\\nPlease note that, intuitively, a non-trivial bound on T is necessary in the decentralized case since the averaging process has to \\u201cmix\\u201d in order to transfer model information. As such, all previous algorithms require such a bound. \\nSpecifically, in the case of [Lian et al, 2017, AD-PSGD], this bound is $T \\\\geq n^6$, whereas [Assran et al, SGP] requires $T \\\\ge n d^2$, where $d$ is the *dimension* parameter, which is much larger than n in our applications. From this point of view, our conditions are the least restrictive, and they do hold in the practical setup we consider (For our ResNet/ImageNet experiments, T \\\\ge 170K, and n^4 = 65K.) \\n\\n*We have added a discussion specifically on this point as part of our main theorem.\\nFor a very detailed discussion, please also see Appendix section B, which presents a detailed comparison with prior work in terms of assumptions.* \\n\\n2.\\t\\u201cStep size requires the knowledge of the number of total steps.\\u201d \\n\\nThis is a common assumption in this setting, see e.g. [D-PSGD, AD-PSGD, SGP]. Additionally, nodes could simply fix $T$ so that they know the learning rate, and run the algorithm so that the total number of interactions is at least $T$, but at most $2T$ . This, for example, can be done by stopping the algorithm after the first time *some* node reaches $2T/n$ interactions. A simple probabilistic argument will ensure that the mentioned bounds hold for the *total number of interactions*. This modification would only change the constants in the final convergence bounds. 
\\n\\n3.\\t\\u201cSampling from global data.\\u201d \\n\\nAs the reviewer noticed, we did provide an analysis without this assumption in the Appendix. \\nTo avoid handwaving we rewrote Theorem 4.2 with non-i.i.d data distribution in mind.\\nThe main difference, which changes convergence bounds, is in the proof of Lemma H.3.\\nThe key change is following.\\nWith i.i.d data we are bounding $\\\\sum_{i=1}^n \\\\| \\\\nabla f(\\\\mu_t)-\\\\sum_{j=1}^n \\\\nabla f(X_t^j)/n\\\\|^2$\\nby $n \\\\| \\\\nabla f(\\\\mu_t)-\\\\sum_{j=1}^n \\\\nabla f(X_t^j)/n\\\\|^2 = 1/n \\\\|\\\\sum_{j=1}^n (\\\\nabla f(\\\\mu_t)-\\\\nabla f(X_t^j)) \\\\|^2 \\\\le L^2 \\\\Gamma_t$ where we used Cauchy-Schwarz and L-smoothness of f.\\nwith non-i.i.d data we need to bound $\\\\sum_{i=1}^n \\\\|\\\\nabla f_i(\\\\mu_t)-\\\\sum_{j=1}^n \\\\nabla f_j(X_t^j)/n\\\\|^2$.\\nWe use that it is equal to $\\\\sum_{i=1}^n \\\\| \\\\nabla f_i(\\\\mu_t)-\\\\nabla f(\\\\mu_t)+\\\\sum_{j=1}^n \\\\nabla f_j (\\\\mu_t)/n-\\\\sum_{j=1}^n \\\\nabla f_j(X_t^j)/n\\\\|^2$. After using Cauchy-Schwarz $\\\\sum_{i=1}^n \\\\|\\\\nabla f_i(\\\\mu_t)-\\\\nabla f(\\\\mu_t)\\\\|^2$ can be bounded by \\nvariance term $\\\\rho^2n$ and $ \\\\sum_{i=1}^n \\\\|\\\\sum_{j=1}^n \\\\nabla f_j (\\\\mu_t)/n-\\\\sum_{j=1}^n \\\\nabla f_j(X_t^j)/n \\\\|^2$ can be \\nbounded as in the case of i.i.d data since local functions $f_i$ are L-smooth as well.\\n \\n4.\\t\\u201cThe Benefit of local steps is not clear.\\u201d \\n\\n*One key benefit of local steps is practical, since it reduces average communication cost. However, there is also a theoretical benefit: \\nthe first term in the bound of Theorem 4.1 gets divided by $H$, which means that the algorithm does take advantage of all local steps. At the same time, local steps do increase the second \\\"variance\\\" term in the bound. Thus, the exact benefit-versus-costs analysis will depend on the exact problem parameters. 
We have added a detailed discussion of this point after the statement of Theorem 4.1.*\"}",
"{\"title\": \"Individual response\", \"comment\": \"[*This response was slightly updated to reflect the contents of the revision.*]\\n\\nThank you for your feedback! We address the main issues below, in order:\\n\\nA.\\tPoints on significance and clarity\\n \\nWe thank the reviewer for these suggestions, which we will address carefully in the revision. In short: \\n1.\\t\\u201cmeaning of decentralized.\\u201d\\n\\nIndeed, we mean decentralized model updates, in the sense that each node has its own (possibly different) version of the model. *We have clarified this in the introduction and throughout the paper.*\\n\\n2.\\tWe will indeed define $T$ and $H$ at the very beginning.\\n\\n3.\\t\\u201cwhere and when quantization is applied.\\u201d \\n\\nWhenever nodes $i$ and $j$ communicate at step $t$ , instead of model $X_t^i$ node $j$ receives a quantized version of it (or alternatively reads quantized version from the shared buffer to which $i$ wrote a quantized version earlier). We provide the full algorithm in Appendix G. \\n\\n*We dedicated a paragraph to explaining the impact of quantization on communication cost, and the relationship to prior work.* \\n\\nB.\\tRemarks on theoretical analysis\\n\\n1.\\t\\u201cSecond moment of the last obtained model.\\u201d \\n\\nThis is a great question, but unfortunately it is not known whether one can provide such a bound on the second moment of the last model even in the classical case of non-convex, single-machine SGD. The guarantees we provide are standard in this setting (see for instance the work of [Lian et al., 2017, 2018] as referenced in our paper). \\n\\n2. Communication complexity:\\n\\n*We provided an accounting of total communication complexity with and without quantization in the revision. This is in the discussion of the main theorem, as well as in the discussion of the quantization extension.*\"}",
"{\"title\": \"simple and effective distributed SGD, with asynchronous, decentralized and reduced communication extensions\", \"review\": \"### Summary\\nThe paper proposes and analyses a distributed learning algorithm for training with Stochastic Gradient Descent a global model on a regular graph, that allows for local and asynchronous gradient updates. Nodes continuously update their local models $X^i$ by gradient descent, while they communicate with their peers (a peer at a time) and update their local model with the pair model average $\\\\frac{X^i + X^j}{2}$. Three extensions of the algorithm are also proposed to relax different constraints, while maintaining the convergence guarantees:\\n1. synchronous updates and decentralized data: if the number of local gradient updates $H_i$ before an edge update is constant, convergence guarantees hold for decentralized data, as long as partitions are i.i.d. from the original distribution;\\n1. asynchronous updates: the number of local gradient updates $H_i$ can vary between nodes and between every edge update;\\n3. reduced communication: model exchanges can be quantized to reduce communication complexity.\\nExperiments in the distributed setting are carried out for image classification and speech recognition, showing that the algorithm is generally able to achieve performance comparable to a model trained in the centralized setting at increased execution time, but faster than state-of-the-art distributed SGD methods.\\n\\n### Significance and clarity\\nContributions are significant and novel, to the best of my knowledge. They consider several settings, which are all theoretically founded. However, the paper is generally hard to follow, also because it has many contributions that are cited in the main text but deferred to the appendix. Still, it would help to clarify the following points from the beginning:\\n1. 
what the authors mean by decentralized, later explained as decentralized model updates, but centralized/distributed data for the experiments;\\n2. define $T$ (global number of edge updates) and $H$ (number of local updates in between edge communication);\\n3. where and when quantization is applied and why it helps in reducing communication complexity in the main text.\\n\\n### Remarks on theoretical analysis\\nTheorem 4.1 shows that the average second moment of the loss gradient evaluated at the average model $\\\\mu_t$ is bounded and decreases with $T$, proving that the model updates converge to a local minimum. This bound however stands for the average of all models obtained at each global step $t$, meaning that it is not necessarily a tight bound for the second moment of the last obtained model, which is the bound we are ultimately interested in.\\nIt would be also interesting to report communication complexities, with and without quantization, and compare them to state-of-the-art methods.\\n\\n### UPDATE\\nI thank the authors for addressing my concerns and confirm my initial rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Proof issues.\", \"review\": \"# Contributions:\\n1. This paper analyzes the convergence of decentralized SGD with asynchronous updates, quantization and local updates, which is novel and challenging.\\n\\n2. The proposed algorithm requires significantly less communications to converge.\\n\\n3. The authors have done extensive analysis of the convergence under different settings with detailed proofs.\\n\\n4. The authors have done some large-scale experiments and show their algorithm performs great in practice.\\n\\n\\n# Strong points:\\n\\n1. The authors have done concrete non-trivial analysis.\\n\\n2. The algorithm is very general, several existing algorithms can be its special cases by different choice of parameters.\\n\\n3. The experiment section provides a large amount of empirical evidence.\\n\\n\\n# Weak points:\\n\\n1. Assumptions are too strong for Theorem 4.1 and 4.2: \\n\\n\\t- Assuming each node can sample from global data is too strong. Section I removes this assumption but without highlighting key steps.\\n\\n\\t- Step size requires the knowledge of the number of total steps.\\n\\n\\t- Number of total steps needs to be larger than $n^4$. Even nodes don't communicate, the algorithm should still converge because the global sampling.\\n\\n2. The benefit of local steps is not clear. For example, if we optimize the convergence rate in Theorem 4.1 over $H$, the best choice is $H = \\\\Big(\\\\frac{\\\\lambda_2^2}{r^2} \\\\cdot \\\\frac{f(\\\\mu_0) - f^*}{L^2 M^2} \\\\Big)^{1/3}$. That is, the optimal $H$ is smaller when $r$ is larger.\\n\\n3. The $H^2$ term in Theorem 4.1 and 4.2 may not be good enough. If set $H \\\\to \\\\infty$, then this bound should reduce to the single-machine SGD. However, the $H^2$ term will go to $\\\\infty$.\\n4. Theorem 4.2 requires $T \\\\sim O(*)$. Does it work if $T$ is greater?\\n\\n\\n5. Definition of $T$ is confusing.\\n\\n6. Arguments for acceleration is not convincing. 
The algorithm only have one pair of nodes communicate, it's not clear how to replace $T$ with $nT$.\\n\\n\\n# Recommendation: \\n\\nWeak reject. As of the current version, the proofs need to be improved. However, I believe the authors can improve in the next version.\\n\\n\\n\\n# Further questions:\\n\\n1. Is it possible to merge Section I with Theorem 4.1 or show the proof? I think there will be one term that depends on $\\\\rho^2$. When $\\\\rho^2 = 0$, Section I will reduce to Theorem 4.1.\\n\\n2. Lemma F.3 is confusing. I think $\\\\Gamma_t$ should decrease with $t$, or use diminishing step size $\\\\eta_t$ to control this term. Then there's no need to set $\\\\eta \\\\sim \\\\frac{1}{\\\\sqrt{T}}$.\\n\\n3. Can you also show the run time plot for ResNet?\\n\\n\\n# Optional improvements:\\n\\n 1. It may be better to remove some small terms to make rate more clearer. For example,\\n\\t - For Theorem 4.1, use $1 \\\\leq \\\\frac{r^2}{\\\\lambda_2^2}$ can get rid of the constant $1$.\\n\\t - For (14) and (19), use $\\\\frac{r}{\\\\lambda_2} \\\\leq \\\\frac{r^2}{\\\\lambda_2^2}$ to get rid of the first order term.\\n\\n2. The 3rd equation in Section D, $\\\\tilde h_i^s$ also depends on $\\\\tilde g_i$, which is not reflected.\\n\\n3. The 1st equation in Section E has an extra '-'.\\n\\n4. Is the coefficient $\\\\frac{n - 2}{n}# in Eq (18) missing?\\n\\n# Update\\n\\nThanks for the authors to address my questions. However, if the analysis can not explain why more local updates can reduce communications, I would not recommend to accept.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"several unclear places; strong assumption on the graph\", \"review\": \"This paper considers several techniques to minimize decentralized cost for training neural network models via stochastic gradient descent (SGD). These techniques include asynchronization, local updates, and quantized communication. Theoretical convergence analyses are provided, and numerical experiments are shown.\", \"strength\": [\"The provided convergence rate is well-separated and well-explained, though the reviewer did not check the correctness of all the proofs.\", \"Combining these techniques into decentralized SGD is new to the best of the reviewer's knowledge.\"], \"weakness\": [\"The number of graphs satisfying the property is very limited. It requires an r-regular graph. That is, the number of edges connected to one node is the same for all nodes. This condition is very difficult to satisfy in applications. Therefore, the application would be limited too.\", \"The quantization part is limited comparing to the other two parts. What does the effect of quantization on the convergence rate and the communication cost? What is the benefit of using the quantization method in Davies et al. (2020)?\", \"In the proposed algorithm, each time an edge is activated and the two nodes connected through the edge are updated. Therefore, there is still synchronization in Alg. 1. Whether is it possible to update one node based on the results from multiple connected nodes (i.e., one node is activated)?\", \"Algorithm 2 is unclear. 'avg' is computed but not used. What are j' and 'i''?\", \"## Update\", \"The authors' response addresses some concerns, and I would like to keep the initial scores.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Review 2\", \"review\": \"Summary: This paper combines the existing scaling techniques to reduce the communication cost of distributed SGD among a large number of computing nodes. These techniques include asynchronous, decentralized, or quantized communication. The authors prove that this combined algorithm converges to a local optimal point. In the experiments, this algorithm also successfully converges and scales for big data. The authors claim that this is the first work to consider decentralization, local updates, asynchrony, and quantization in conjunction.\\n\\nOverall, the contribution of this paper is relatively marginal. The algorithm simply combines many different existing techniques and does not lead to any substantial new development. Below are some comments and questions. \\n\\n(1) The first two paragraphs of the introduction look wordy. They introduced the distributed SGD problem and listed the scaling techniques as well as the relevant literature, but the meaning of these techniques is unclear. How these techniques are applied and combined is also unclear. \\n\\n(2) The meaning of n and T are not formally defined. \\n\\n(3) In Theorem 4.1, the assumption that T>=n^4 (n^4 can be very large) is the disadvantage of this algorithm because the same convergence rate O(1/sqrt(T)) has been achieved without such assumption in some distributed settings, including plain distributed SGD, federated average, etc.\\n\\n(4) The claim in the abstract that the new algorithm can converge to local minima is not supported, since the theorems only imply gradient convergence. \\n\\n(5) In the theoretical part, I did not see in which measure does this new algorithm excel the existing ones. The authors should clarify this. In the experiments, the objective function value of interest is not compared. 
\\n\\n(6) On page 2, the authors said \\u201cSwarmSGD has a \\u0398(n) speedup in the non-convex case, matching results from previous work which considered decentralized dynamics but which synchronize upon every SGD step.\\u201d What is the measure, is it the number of communications, local SGD iterations or gradient evaluations? \\u201cMatching results\\u201d can be interpreted as equal to the previous rate, which seems to contradict with \\u0398(n) speedup. Please clarify this. \\n\\n(7) In the contribution part, the authors mention that their new algorithm has lower average synchronization cost per iteration but more iterations in the experiments, how about the total synchronization cost? \\n\\n(8) The authors use multiple variables to denote the number of nodes, including n, P and m. Please use only one. \\n\\n(9) The space around the section captions is too narrow. This is not suggested in general.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
6c6KZUdm1Nq | Regression from Upper One-side Labeled Data | [
"Takayuki Katsuki"
] | We address a regression problem from weakly labeled data that are correctly labeled only above a regression line, i.e., upper one-side labeled data.
The label values of the data are the results of sensing the magnitude of some phenomenon.
In this case, the labels often contain missing or incomplete observations whose values are lower than those of correct observations and are also usually lower than the regression line. It follows that data labeled with lower values than the estimations of a regression function (lower-side data) are mixed with data that should originally be labeled above the regression line (upper-side data).
When such missing label observations are observed in a non-negligible amount, we thus should assume our lower-side data to be unlabeled data that are a mix of original upper- and lower-side data.
We formulate a regression problem from these upper-side labeled and lower-side unlabeled data. We then derive a learning algorithm in an unbiased and consistent manner to ordinary regression that is learned from data labeled correctly in both upper- and lower-side cases. Our key idea is that we can derive a gradient that requires only upper-side data and unlabeled data as the equivalent expression of that for ordinary regression. We additionally found that a specific class of losses enables us to learn unbiased solutions practically. In numerical experiments on synthetic and real-world datasets, we demonstrate the advantages of our algorithm. | [
"regression",
"weakly-supervised learning",
"healthcare"
] | Reject | https://openreview.net/pdf?id=6c6KZUdm1Nq | https://openreview.net/forum?id=6c6KZUdm1Nq | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"h0rVqvk4aZ",
"9co-AG2dZC3",
"DJg2v3QajOg",
"mxi9z2dX14_",
"ZCR0-3oGH8w",
"eS0UgbeTf8V",
"s7YqFlE1sbt",
"19ZCQCxI_HR",
"vpljbq0QdC5",
"aOt6enQucgn",
"82cQtEXrp-Y"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513541,
1606276834457,
1606276550312,
1606198329141,
1605879141442,
1605639361207,
1605638804598,
1605638571693,
1604494223510,
1603935899210,
1603544836501
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3505/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3505/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3505/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3505/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3505/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3505/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3505/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3505/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3505/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3505/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper addresses regression in a weakly supervised setting where the correct labels are only available for examples whose prediction lie above some threshold. The paper proposes a method using a gradient that is unbiased and consistent.\", \"pros\": [\"Problem setting is new and this paper is one of the first works exploring it.\", \"The procedure comes with some unbiasedness and consistency guarantees.\", \"Experimental results on a wide variety of datasets and domains.\"], \"cons\": [\"Novelty and technical contribution is limited.\", \"Motivation of the problem setting was found to be unclear.\", \"Some gaps in the experimental section (i.e. needing the use of synthetic data or synthetic modifications of the real data).\", \"Overall, the reviewers felt that as presented, the paper did not convincingly motivate the proposed upper one-sided regression problem as important or relevant in practice, which was a key reason for rejection. The paper may contain some nice ideas and I recommend taking the reviewer feedback to improve the presentation.\"]}",
"{\"title\": \"Positive Response\", \"comment\": \"On the other side, I admit contributions on empirical evaluation should be noted and raised my score. But I still think the paper need to be presented in an alternative way to be accepted, paying more attention on the empirical side.\"}",
"{\"title\": \"Sample Approximation for Eq.(13)\", \"comment\": \"Thank you for your further explanation. I agree the discussion is fine until Eq.(13). However, In Eq.(13), you need samples from the distribution E_{up} and E_{lo}. I think the key disagreement between me and the authors is that I consider these two distributions are not varying during training. For example, in Figure 1 (1), the missing observasion period is fixed when recording the data, and will not change when you learn using the observed data. Thus, choosing different samples by different f in Algorithm 1 line 7 to 10 is the same as using noisy samples to approximate the expectation over a distribution.\"}",
"{\"title\": \"Response to Reviewer 5\", \"comment\": \"Thank you very much for your additional reviews and suggestions.\\nIn our formulation, E and p(x, y), which produce data, are fixed. From the definition of Eq.(3), E_{up} and E_{lo} depend on a current f for both ordinary regression and our one-side regression. As you know, f is changing in SGD and we can see that E_{up} and E_{lo} are also changing in each step of SGD from the view point of Eq.(3), but E and p(x, y) are not changing. Thus, since Theorem 1 holds for each step of SGD, we can say that Eq.(13) is a mini-batch approximation for the unbiased gradient in Eq.(9).\\nAlso, we justified the effectiveness of Eq.(13) and Algorithm 1 with experimental results, as the other reviewers mentioned. In the main text, we also have explicitly described that Eq.(13) is an approximation for the gradient in Eq.(9).\\nThe point of our manuscript is to show that we can derive a gradient for the one-side regression in an unbiased and consistent manner to that for ordinary regression, that is Theorem 1, and to show we can develop a practical algorithm to implement that in a straightforward approximation, a mini-batch approximation.\"}",
"{\"title\": \"Data should be generated from f*.\", \"comment\": \"I admit that for Eq.(3), you can change for every possible function. However, data should be generated from an unknown underlying function f*. This means, one observed dataset corresponds to one single underlying function. Thus, when given a dataset, the underlying f* is accordingly fixed, thus the corresponding seperation for up data and lower data should also be fixed. Thereforer, I still consider changing this separation for each step is still an unjustified approximation.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank for your careful reviews and valuable suggestions for our paper.\\n\\n[Gradient of the form g(sgn(f(x)-y) is meaningful and common]\\nAs mentioned in the last paragraph in Section2.2, gradients of the form g(sgn(f(x)-y) appear in such as the least absolute regression, which works well and is a common regression method.\\n\\n[Why do the last two terms on the RHS of (4) need to be written as a difference?]\\nBecause we do not have any lower-side data, we need to write the loss with only upper-side data and unlabeled data. We rewrite Eq.(3), which requires both upper-side data and lower-side data, into Eq.(4), which requires only upper-side data and unlabeled data.\\n\\n[y can be noisy]\\nWe assume the existence of the noise in the loss function in Eq.(2), which means expected loss over the corresponding distribution.\"}",
"{\"title\": \"Response to Reviewer 5\", \"comment\": \"We thank for your careful reviews and valuable suggestions for our paper.\\n\\n[Justification for Eq.(13)]\\nEq.(13) is just a mini-batch approximation for the unbiased gradient in Eq.(9), it works well same to the ordinary mini-batch approximation. Also, we would like to note that the decomposition in Eq.(3) is for general regression problem not specifically ours. In Eq.(3), E_{up} and \\\\pi_{up} are originally changing depending on f and it is not our assumption or proposal.\\nAs shown in Theorem 1, for any f, the gradients in Eq.(8) and Eq.(9) are unbiased to and consistent with the gradient of L(f) in Eq.(3). It also means that for any E_{up} and the corresponding distribution for upper-side case, the gradient in Eq.(9) is unbiased and consistent. Consequently, changing E_{up} and the corresponding upper-side samples for every updates in the gradient descent with the current model does not affect the theorem.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank for your careful reviews and valuable suggestions for our paper. We will update our paper based on your comments, such as the order of Eq. (4) and Eq. (5).\\n\\n[Experimental setting for synthetic data]\\nIt is impossible to obtain the upper-side and lower-side data exactly in real data. Thus, we conducted the experiments on synthetic data where we do not know which data should be upper-side or unlabeled (lower-side) to evaluate the feasibility of our method.\"}",
"{\"title\": \"Official Blind Review #5\", \"review\": \"Summary: This paper considers a regression setting in which the missing values are observed with lower values than the true values. Authors provided appealing application for this problem setting. They rewrote the risk and provided an unbiased gradient estimator. However, there is a gap between the estimator and the actual implementation, thus making the overall paper less convincible.\", \"main_concern\": [\"A gap exists between Eq. (8) and Eq. (13). In Eq. (8), the expectation is taken over the distribution of \\\"up\\\". This distribution, as well as \\\\pi_{up}, is fixed throughout training. Unlike PU classification, this essential information is not given in this problem setting. However, according to Eq. (13) and Algorithm 1, this distribution and \\\\pi_{up} change ever minibatch with the current model. I admit this is a pratical algorithm, but it differs substaintialy from the first half of the paper. To fill the gap, investigation on how Eq. (13) approximate Eq. (8) should be conducted at least.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Bypassing data corruption in upper one-side labeled data\", \"review\": \"The authors study the problem of training a regression model when only for a subset of the datapoints (those for which their label lie above the current model prediction) the correct labels are available. A few comments,\\n\\n1) It is unclear if the labels y can be noisy. I assume they can't be, because all the derivations seem to be under the assumption they are not. \\n2) The application of this setup is not clear to me. I think the authors would benefit from motivating it via other papers that study the same regression problem (if any), as opposed to citing other topics in motion sensor research. They would also benefit from writing down the existing work on classification more explicitly, and connecting it to the regression setup in the paper. It is unclear if the classification literature treats a similar (or exactly the same) problem in the classification setting and what is the hard part of translating these results into regression.\\n\\nThe solution to the problem proposed by the authors is quite simple. This would not be a downside if the motivation of the problem and the related work was established with more authority at the beginning of the problem. I am concerned that in the case of losses of the form g(sgn(f(x)-y), f(x)) the problem is not meaningful because in this case the learner only requires to know if f(x) < y or not. The initial problem the authors set to solve vanishes in this case. This means that Theorem 1 is not very informative. \\n\\nIn section 3.2, there is little explanation as to why using a \\\\rho multiplier in (13). This does not seem to be in accordance with Theorem 1. It is also unclear to me why do the last two terms on the RHS of (4) need to be written as a difference, when at the end gradients of an expectation of a loss of the form g(sgn(f(x)-y)), f(x)) over the unlabeled dataset can be computed directly. 
There doesn't seem to be any need of writing it as a difference. \\n\\nThe experimental evaluation is thorough.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"New problem setting and algorithm with limited technical contributions\", \"review\": \"In this paper, the authors address a new weakly supervised regression problem. In this problem setting, upper-side data (labeled above the regression line) and unlabeled data are provided. To solve this problem, the authors derive a learning algorithm in an unbiased and\\nconsistent manner to ordinary regression that is learned from data labeled correctly in both upper- and lower-side cases. Experiments demonstrate the advantages of the proposed algorithm.\", \"pros\": \"1.\\tTo the best of my knowledge, this paper is the first to solve the weakly supervised regression problem presented in the paper. I consider that it is the biggest advantage of this paper.\\n2.\\tThis paper proposes a consistent learning algorithm to solve the above problem.\\n3.\\tExperiments demonstrate the effectiveness of the proposed algorithm.\", \"cons\": \"1.\\tThe presentation of this paper needs to be improved. For example, I understand that in the introduction section, the authors try to justify that the weakly supervised regression problem (where upper-side and unlabeled data are available) is reasonable and could be encountered in real-world settings. However, I personally feel that the presentation is not very clear and I am not fully convinced. In addition, for the order of Eq. (4) and Eq. (5), I think it would be better to present Eq. (5) before Eq. (4), as Eq. (4) relies on Eq. (5).\\n2.\\tFor the proposed consistent algorithm, I would admit that it is novel to some degree, while the technical contribution of this algorithm is limited. It is worth noting that the proposed algorithm is adapted from the risk estimator of PU learning (Du Plessis et al., 2014; 2015). I think the only key contribution lies in Eq. 
(6), e.g., the authors show that instead of setting the value of ${\\\\tilde{y}}\\\\_{\\\\text{lo}}$, we can find the gradient only depends on the sign of $f(\\\\boldsymbol{x})-{\\\\tilde{y}}\\\\_{\\\\text{lo}}$.\\n3.\\tFor the experiments, it seems that the authors do not use a ground-truth regression line to separate the given data, and obtain upper-side and unlabeled data. Instead, they corrupt some selected data by setting their value to the minimum regression value. I feel that this practical operation does not accord with the proposed problem setting. Maybe we could use originally labeled data to obtain a well-trained regression line and then obtain the required upper-side and unlabeled data.\\n\\nIn summary, this paper proposed a novel problem setting and a novel learning algorithm, while the problem setting is not well justified and the technical contribution of the algorithm is limited.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
_mQp5cr_iNy | Adversarially Guided Actor-Critic | [
"Yannis Flet-Berliac",
"Johan Ferret",
"Olivier Pietquin",
"Philippe Preux",
"Matthieu Geist"
] | Despite definite success in deep reinforcement learning problems, actor-critic algorithms are still confronted with sample inefficiency in complex environments, particularly in tasks where efficient exploration is a bottleneck. These methods consider a policy (the actor) and a value function (the critic) whose respective losses are built using different motivations and approaches. This paper introduces a third protagonist: the adversary. While the adversary mimics the actor by minimizing the KL-divergence between their respective action distributions, the actor, in addition to learning to solve the task, tries to differentiate itself from the adversary predictions. This novel objective stimulates the actor to follow strategies that could not have been correctly predicted from previous trajectories, making its behavior innovative in tasks where the reward is extremely rare. Our experimental analysis shows that the resulting Adversarially Guided Actor-Critic (AGAC) algorithm leads to more exhaustive exploration. Notably, AGAC outperforms current state-of-the-art methods on a set of various hard-exploration and procedurally-generated tasks. | [
"actor",
"tasks",
"methods",
"definite success",
"deep reinforcement",
"problems",
"algorithms",
"sample inefficiency",
"complex environments",
"efficient exploration"
] | Accept (Poster) | https://openreview.net/pdf?id=_mQp5cr_iNy | https://openreview.net/forum?id=_mQp5cr_iNy | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"T9_6-DHaySo",
"XSp0nxeu7k",
"qA3VjpeKHzE",
"SXrOPxeXJaA",
"e4HT-wIBxz",
"9htLjuFTGhS",
"NOZyXQTbzsu",
"idtYLvxEKlF"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040494941,
1606233342220,
1605623840028,
1605623729778,
1605623229532,
1603836832371,
1603831862077,
1603629870641
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3504/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3504/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3504/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3504/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3504/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3504/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3504/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This work tackles to address the sparse reward problem in RL. They augment actor-critic algorithms by adding an adversarial policy. The adversary tries to mimic the actor while the actor itself tries to differentiate itself from the adversary in addition to learning to solve the task. This in a way provides diversity in exploration behavior. Reviewers liked the paper in general but had several clarification questions. The authors provided the rebuttal and addressed some of the concerns. Considering the reviews and rebuttal, AC and reviewers believe that the paper provides insights that are useful to share with the community. That being said, the paper will still immensely benefit with more extensive experimentation on standard benchmark environments like Atari, etc. Please refer to the reviews for other feedback and suggestions.\"}",
"{\"title\": \"General response\", \"comment\": \"We would like to thank all the reviewers again for their insights and feedback. We have taken all comments into consideration, we did our best to address all concerns raised, and we think that the revised manuscript considerably improves the thoroughness of the experiments and clarity of writing.\\n\\nOverall, we reworked Section 4 entirely to reflect the comments of R2 and R3: the high-level idea of AGAC, the elements of the algorithm, and its theoretical analysis were all reorganized for clarity. We incorporated a proof for the expression of the maximizer in the optimization problem of the Policy Iteration scheme for AGAC in Appendix F. We created additional tables that quantify the VizDoom and MiniGrid scores of all methods. As mentioned in our response to R2, we do commit to updating Fig. 2 and 3 with complete curves for all methods we compare against in the camera-ready version.\\n\\nWe have also added state visitation heatmaps in Fig. 6 to respond to the concerns raised by R1 and R2. These heatmaps visually assess the difference between the exploration strategy induced by our method and those of RIDE, RND, Count and a random uniform policy. In a nutshell, they show that our approach is the only one to reach the last (tenth) room in a complex reward-free task, which indicates that adversarially-induced diversity is good for exploration. Additionally, we have also enriched the set of tasks used to evaluate the performance of AGAC: the task MiniGrid-ObstructedMaze-2Q has been added to Fig. 3.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their detailed comments.\\n\\nIn AGAC, the objective function for the critic has indeed a natural origin. Adding the KL divergence to the critic target mirrors the action log-probability difference we add to the advantage, which can be interpreted nicely. Taking $\\\\lambda \\\\rightarrow 1$ for simplicity, under which the generalized advantage is $A_t = G_t - V_{\\\\phi_{\\\\text{old}}}(s_t)$, we can actually express the modified advantage as $A_{t}^{\\\\text{AGAC}} = G_t - \\\\hat{V}^{\\\\phi_{\\\\text{old}}}_{t} + c ( \\\\log \\\\pi(a_t | s_t) - \\\\log \\\\pi_\\\\text{adv}(a_t | s_t) - \\\\hat{D}_{\\\\mathrm{KL}}^{\\\\phi_\\\\text{old}}(\\\\pi(\\\\cdot | s_t)\\\\|\\\\|\\\\pi_\\\\text{adv}(\\\\cdot | s_t)) )$ with $G_t$ the observed return, $\\\\hat{V}^{\\\\phi_\\\\text{old}}_t$ the estimated return and $\\\\hat{D}_\\\\mathrm{KL}^{\\\\phi_\\\\text{old}}(\\\\pi(\\\\cdot | s_t)\\\\|\\\\|\\\\pi_\\\\text{adv}(\\\\cdot | s_t))$ the estimated KL-divergence (both are estimated components of the modified critic learned in AGAC). What this decomposition shows is that AGAC favors transitions whose actions are less accurately predicted by the adversary than the average action, i.e. $\\\\log \\\\pi(a | s) - \\\\log \\\\pi_\\\\text{adv}(a | s) \\\\geq \\\\hat{D}_\\\\mathrm{KL}^{\\\\phi_\\\\text{old}}(\\\\pi(\\\\cdot | s)\\\\|\\\\pi_\\\\text{adv}(\\\\cdot | s))$. We included these details and revised the whole Section 4 in the updated draft, which we think is a lot clearer overall.\\n\\nIn theory, $\\\\pi_{adv}$ could indeed match the current policy $\\\\pi_k$ since it gets supervision from $\\\\pi_k$. We have several reasons to think it does not: it gets information with partial coverage about $\\\\pi_k$, it has a limited optimization budget (few steps of SGD) and additionally we use a smaller learning rate to update $\\\\pi_{adv}$. 
Thus, it is more realistic to consider that it matches an unknown mixture of all previous policies.\\n\\nWe thank the reviewer for their suggestion of adding a small proof for the solution to the modified policy iteration scheme. The proof can be found in Appendix F in the revised draft. In a nutshell, we show that AGAC\\u2019s optimization problem belongs to a family of optimization problems (regularized policy iteration) whose closed-form solutions are known, and of which our solution is a simple variation.\\n\\nAbout Section 5.2, we indeed meant that the agent gets a partial view of the environment (a self-including 7x7 square, see Appendix B.1 for more information). We agree that partially-observable environments is a preferable terminology and employ it in the revised draft. We have also clarified the meaning of intrinsic reward in the context of AGAC, which is indeed the exploration bonus.\\n\\nRegarding Section 5.4, in Fig. 5 we see that the performance of reward-free AGAC stabilizes around an average return of ~0.15. Since the return of an episode is either 0 or 1 (depending on whether the agent reached the goal state or not), and since this value is aggregated across several episodes, it indicates that reward-free AGAC succeeds in ~15% of the procedurally-generated tasks. Comparatively, random agents have a zero average return, which we display in the updated Fig. 5. We rewrote unclear parts of Section 5.4 in the revised draft.\\n\\nAbout Section 5.5, they are right concerning Fig. 6: we present the last ten episodes of an agent trained in a procedurally-generated environment. For these ten episodes although, we fix the seed of the environment so that the layout remains the same, and the agent keeps learning. This allows us to study the evolution of behaviors across consecutive episodes. In Fig. 
7, the agent is instead trained in a singleton environment (the layout is fixed, and the agent needs to solve the same task in the same environment at each episode). Hence the layout remains the same for each episode (not just the last ten). Note that in both experiments, there is no extrinsic reward because the goal of Section 5.5 is to investigate the capacity of our method to explore environments. We have reworked Section 5.5 to reflect their comments and also define \\u201csingleton\\u201d environments. We think the revised version provides a clearer description of the study.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their comments.\\n\\n1. We are on the same page with the reviewer regarding the exploration/exploitation trade-off. Actually, our method explicitly aims at balancing exploration and exploitation, as indicated at the end of Section 4.2: \\u201cIn particular, the $c$ coefficient behind the adversarial bonus is linearly annealed\\u201d. As a result, the agent is most encouraged to escape the adversary\\u2019s predictions at the beginning of training, which leads to better exploration, and we reduce this incentive across time to avoid instability. While this is a simple dynamic control scheme, we find it to perform quite well in our experiments, and it does not introduce additional hyperparameters. Since this is a central point of the algorithm, we emphasize it more in the revised draft.\\n\\n2. Thanks for suggesting to add heatmaps for different methods. We added heatmaps for several methods (uniform random policy, Count, RND, RIDE and AGAC) in Fig. 6 in the revised draft, for both fixed and procedurally-generated tasks. In a nutshell, we see that AGAC explores more efficiently and reaches further maze states than other methods, in both scenarios.\\n\\n3. We do not quite understand the remark about the red font. We did that to emphasize the difference between our approach and a standard actor-critic. We think it makes the paper clearer. Is the problem using color or the red color specifically? We can easily change the color if necessary, maybe red is not the most convenient for color blind people. Could the reviewer expand on this?\\n\\nWe hope to have addressed the reviewer's concerns. If this is the case, we invite the reviewer to consider updating their score. If not, additional comments would be welcomed.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their positive feedback and insightful comments. Below are replies to the questions the reviewer raised.\\n\\nRegarding the justification of the method (Section 4), we propose an updated version in the revised draft that we think makes the algorithm, high-level intuition and theoretical analysis clearer. For completion, we also include a short proof of the solution for the optimization problem of AGAC in the Policy Iteration setting. As for the specific terms in the equations being described as bonuses, we observe that in the RL literature the term \\u201cbonus\\u201d can be used for possibly negative quantities [1,2]. We consider the action log-probability difference as a bonus as it rewards diversified behavior, but it indeed could take negative values.\\n\\nWe acknowledge that the step function figures (Fig. 2 and 3) are not ideal. While we cannot realistically run all methods and update all curves during the course of the discussion period, we do commit to including all regular curves in figures for the camera-ready version. In the meantime, we added Tables 1 and 2 that quantify more precisely the performance of the different methods and should make the accompanying paragraph to Fig. 2 less ambiguous.\\n\\nIn the \\u201cNoExtrinsicReward\\u201d experiment, following their suggestion, we now report the performance of a uniform random policy (in Fig. 5). Note that, in this MiniGrid task, the agent must perform a specific action to open a door and go to the next room, and that episodes are limited to 200 timesteps of interaction, which in part explains why random agents never solve the task.\\n\\nWe have also reworked Section 5.5 to reflect their comments and that of Reviewer 1. In particular, we add a new figure (Fig. 6) to the paper, which shows the state visitation heatmaps of a random agent in the same environment together with those of RND, Count, RIDE and AGAC. 
This new study provides an additional comparative perspective and further illustrates the difficulty of the task.\\n\\nRegarding the parameters of the model, there are no shared parameters between the three entities. This is indicated at the end of Section 4.2: \\u201cNote that the parameters are not shared between the policy, the critic and the adversary [...]\\u201d.\\n\\nWe fixed all the typos that the reviewer identified.\\n\\nFinally, we edited the claim regarding the use of regularization for exploration (in Section 2), since we agree that it was not justified.\\n\\n[1] Savinov Nikolay, Anton Raichuk, Damien Vincent, Raphael Marinier, Marc Pollefeys, Timothy Lillicrap, and Sylvain Gelly. \\\"Episodic Curiosity through Reachability.\\\" In International Conference on Learning Representations 2018. \\n[2] Oudeyer Pierre-Yves, Frederic Kaplan, and Verena V. Hafner. \\\"Intrinsic motivation systems for autonomous mental development.\\\" IEEE transactions on evolutionary computation 11, no. 2 (2007): 265-286.\"}",
"{\"title\": \"Good paper with minor issues in presentation (AnonReviewer2)\", \"review\": \"The authors propose a modification of the well-known actor-critic algorithm, give a intuition for how it works (\\\"adding an adversary\\\") and present experiments showing state-of-the-art performance on certain tasks on the VizDoom and MiniGrid environment, beating several recent baselines. While the improvements on previous algorithms are incremental, this is very much in line with recent papers in the field and certainly a worthwhile direction of research.\\n\\nThe paper appears well argued and is relatively well written, with only minor unclear parts and a few typos, listed below. Its results are impressive and I recommend publication.\\n\\nAs for areas of improvement, I didn't immediately understand all parts of the formulae in section 4 (and neither did I fully grasp the motivating simplification in section 4.1). For instance, it is not immediately obvious to me why the additional term in equation (4) could be described as a \\\"bonus\\\", as I see no reason the sign of $V_\\\\phi(s_t)-\\\\hat{V}_t$ couldn't be negative.\\n\\nI would also propose reworking the figures. E.g., it's unclear to me where the comparison curves indicated in the label are in Figure 2. In various other figures, the baselines to compare to are indicated as step functions. I suspect this is because the raw data for these baselines wasn't available. Since the figures are trying to make a point about sample efficiency a table of numbers might not be the best alternative (although one might try that, perhaps giving the average return at several points during training?), but at least this reviewer isn't used to displaying individual data points as step functions of this sort.\", \"the_accompanying_paragraph_to_figure_2_also_read_somewhat_mysteriously_to_this_reviewer\": \"\\\"Results in Fig. 2 [...] indicate that AGAC clearly outperforms other methods in sample efficiency. 
Then, with nearly 2x more transitions, the graph in only ICM and RIDE match the score of AGAC.\\\" In combination with the lack of apparent ICM, RIDE, etc lines in Fig. 2 this makes for a confusing impression.\\n\\nCertain other claims in the paper seem overblown or indeed irrelevant, to a greater extent yet than usual in the field of reinforcement learning. For example, take the statement \\\"Note that in the configuration \\u201cNoExtrinsicReward\\u201d, the reward signal is not given. That is, the actor is not optimized for it, confirming that the agent exhaustively covers the environment.\\\" I'd propose to compare the performance here with that of a random agent, and perhaps also with a randomly sampled shallow neural network with sampling from output logits. The accompanying graph seems to be rather stable, as a random agent would be, and although it appears to show some improvement at the very beginning this may well be an artefact of the smoothing scheme used, or be otherwise postprocessing-related.\\n\\nA similar comment goes for section 5.5, \\\"Promoting Diversity\\\". To this reviewer, this entire section lacks motivation for why a quasi-uniform heatmap would be exceptionally good? At least heatmaps generated by a random agent should be shown as a point of comparison -- might the agent simply be getting stuck randomly by a process analogous to a constrained brownian motion? In fact, what kind of behaviour of an agent trained solely on the intrinsic goal of not being predictable by one of its subsystems could be considered \\\"good\\\" vs. \\\"bad\\\" in the first place? The ultimate goal of \\\"intrinsic motivation\\\"-type agents like RIDE is to optimize an objective metric (a reward function, number of levels solved, specific states attained). 
"Of course it's interesting to have an agent that's able to visit every state of the MDP, but it's not clear if that's hard in the given situation and doubtful that there was more than pure chance as a cause.\\n\\nAs an additional question, I was wondering if there are no shared parameters in the three parts of the model? As having shared parameters is common, it would be nice to point this out explicitly.\", \"further_typos\": \"\\\"generalization in is a key challenge in RL\\\"\\n\\nClosing parenthesis in \\\"Let $\\\\pi : S \\\\longrightarrow\\\\Delta A)$\\\" \\n\\nMissing $\\\\rm\\\\LaTeX$ reference in \\\"Fig. ??\\\" on page 14 of the appendix.\\n\\nAdditionally, while I cannot claim to know the relevant literature exceptionally well, the claim \\\"With the exception of Han & Sung (2020), which uses the entropy of the mixture between the policy induced from a replay buffer and the current policy as a regularizer, none of these methods explicitly use regularization to promote exploration.\\\" strikes me as dubious. For instance, Schmitt et al., \\\"Kickstarting deep reinforcement learning\\\" (2018), has a similar regularization scheme, which may \\\"promote exploration\\\" also in their case, as the mechanism of an algorithm is independent of its motivation or the particular angle used in its description.\\n\\nAll in all, this is a good paper which I enjoyed reading and I recommend it for publication in ICLR after some slight improvements.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"An adversarial extension of an actor critic model for efficient and generalisable exploration in very sparse reward environments with very competitive performance against sota methods.\", \"review\": \"The paper presents AGAC, an architecture for efficient, and generalisable, exploration in RL in settings with very sparse rewards. The model is compared against a number of SOTA methods for hard exploration problems on a number of procedurally generated environments, with very good performance results compared to the baselines.\\n\\nThe basic architecture extends an actor-critic model with an additional element, an adversary. The goal of the adversary is to predict correctly the actor's choices, minimizing its discrepancy from the actor. The goal of the actor, in addition to the standard maximization of expected return, is to maximize its discrepancy from the adversary, or in other words to stray away from its past self. The latter encourages exploration. AGAC quantifies the said discrepancy as the difference of the log probabilities of the actions under the actor and the adversary, the expectation of which under the actor is the KL divergence. \\n\\nThe actor-critic objective functions are adjusted as follows. The generalised advantage estimator now contains the discrepancy term, which encourages exploration. The critic's loss now includes as part of the target the KL divergence of the actor and the adversary. The adversary itself is trained to minimize the KL divergence from the actor.\\n\\nThe paper provides motivation for the design choices under a setting in which the policy loss is based on the Q value. 
"Under such a setting the paper shows that the resulting objective, in addition to maximizing the return, keeps the next update of the actor policy close to the previous actor and far from the adversary policy.\\n\\nThe experimental section includes a rich set of results over procedurally generated environments which evaluate how the exploration of AGAC does in unseen and very sparse reward environments. The evaluation results show rather important improvements over competitive methods that seek to perform well in sparse reward environments. \\n\\nI had some clarity and presentation issues with the paper, see just below; overall this seems to be a simple idea which brings strong performance improvements in challenging settings.\", \"detailed_questions\": \"With respect to the definition of the critic's objective function, eq 2. Does that\\nobjective function derive naturally from the new definition of the generalised advantage, \\neq 1? If yes, a short explanation would be useful; if not, what is the motivation for such \\na target definition in learning the value function?\\n\\nIn presenting the motivation of AGAC in section 4.1, my understanding is that the adversary \\nis seen as a policy that represents the k-1, k-2, ... ? past policies. I would like to see \\nsome more discussion on why this is so. Looking at the way the adversary is trained, eq 3, \\nI would say it tries to rather replicate the last, kth policy. \\n\\nIn the same section some more extensive discussion, maybe in the appendix, would be useful \\nwhen discussing the particular form of the solution for the policy iteration optimization \\nproblem. In the discussion of the solution, what is \\\\tau?\\n \\nSection 5.2, hard exploration with partially observable policy, I have a couple of terminology\\nissues. \\n* I am not sure what I should understand here by partially-observable policy? does that mean that the policy \\nhas only access to a part of the environment/state description? 
as in a state-centric view? Wouldn't a better\\nterm be partially-observable environments?\\n* In the same section the paper presents the intrinsic reward results, though in the case of AGAC\\nthis has never been defined/described. I guess this refers to the exploration bonus, but it would \\nhave been useful to clarify that. \\n\\nSection 5.4 exploration with no reward. \\n* Probably a naive question: how do we see in fig 5 that the agent succeeds in a significant proportion of the episodes? is it the fact that we have an average non-zero return?\\n* I am not sure I see where the confirmation of the fact that the agent exhaustively covers the environment comes from?\\n\\n\\nSection 5.5, diversity. \\nIn figure 6 we see a still evolving policy, and I guess the bottom right heatmap of that figure is the final trained\\nagent. What is the difference from figure 7? I would have thought that in 7 we only see the trained agent, but then\\nthe label speaks of \\\"of the last ten episodes of an agent trained in a singleton environment\\\" maybe the labels of the \\nfigures got mixed?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"This paper presented an interesting solution to the actor-critic framework with adversarial learning\", \"review\": \"This paper proposed a new actor-critic framework with an adversary guide for deep reinforcement learning (RL), and introduced a new Kullback-Leibler divergence bonus term based on the difference between the actor network and the adversary network to deal with exploration in RL. The experimental results showed the merit of this method for exploration. Some comments are provided as follows.\\n1) Although the authors conducted analysis to carry out properties of hyperparameters, there is still something unclear in the hyperparameter setting. Exploitation and exploration should be balanced during the learning procedure. RL algorithms generally explore more at the early stage and then exploit at the later stage. But this work fixed the exploration reward hyperparameter throughout the learning procedure. Although this solution seems to converge with a fixed $c$, it would be better to implement a dynamic control scheme.\\n2) Section 5.5 showed the state visitation heat maps to illustrate the capability of exploration using the proposed method. However, it would be more convincing to make comparisons across different methods.\\n3) Some equations were written in a red font, which should not be allowed in an ICLR submission.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
_77KiX2VIEg | On the Effectiveness of Deep Ensembles for Small Data Tasks | [
"Lorenzo Brigato",
"Luca Iocchi"
] | Deep neural networks represent the gold standard for image classification.
However, they usually need large amounts of data to reach superior performance.
In this work, we focus on image classification problems with a few labeled examples per class and improve sample efficiency in the low data regime by using an ensemble of relatively small deep networks.
For the first time, our work broadly studies the existing concept of neural ensembling in small data domains, through an extensive validation using popular datasets and architectures.
We show that deep ensembling is a simple yet effective technique that outperforms current state-of-the-art approaches for learning from small datasets.
We compare different ensemble configurations to their deeper and wider competitors given a total fixed computational budget and provide empirical evidence of their advantage.
Furthermore, we investigate the effectiveness of different losses and show that their choice should be made considering different factors. | [
"small data",
"deep learning",
"ensembles",
"classification"
] | Reject | https://openreview.net/pdf?id=_77KiX2VIEg | https://openreview.net/forum?id=_77KiX2VIEg | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"GM2NlPvg8Tk",
"5EYcPPkoc0S",
"DP1vRSICQat",
"1CiuqKDN9Np",
"ZR9ckL_z2m",
"mYmtEmU3n7T",
"q9RX8UD3EJY",
"70xDscr5jNM",
"997dEeb_37E",
"PpkbaEjVXM"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040353376,
1606302821473,
1606302627120,
1606302043828,
1606301762330,
1606301549950,
1604196978573,
1603964845736,
1603926601742,
1603926514702
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3500/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3500/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3500/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3500/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3500/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3500/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3500/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3500/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3500/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper received negative and borderline reviews. The reviewers have raised several concerns about the novelty of the approach and the lack of convincing experiments. The rebuttal only partially addresses these concerns. Overall, the area chair agrees with the reviewer's assessment and follows their recommendation.\"}",
"{\"title\": \"Answer to Reviewer 4\", \"comment\": [\"We appreciate that the Reviewer finds our work promising and well-motivated. We thank the reviewer for the provided suggestions that we addressed in the revised version of our manuscript:\", \"We added two more popular datasets (SVHN and Stanford Dogs). We added more vision tasks since our focus is on image classification and the computer vision field. It would definitely be interesting to try our approach on non-vision tasks. We left it as future work.\", \"We further explored the trade-off between depth and ensemble dimension. In more detail, we varied the depth of the baseline network and the number of ensembles accordingly. We invite the Reviewer to refer to Figure 1 and Table 5 for the results. Moreover, we trained diverse architectures at different depths and compared them with the corresponding ensembles (Table 2).\", \"We also added comparisons for the width dimension. More details are given in the reply to Reviewer 3. In the revised version of the paper, the results of this analysis are in Tables 2, 5, and Figure 1. We discussed these results in Section 5.3.\"]}"
"{\"title\": \"Answer to Reviewer 3\", \"comment\": \"We thank the Reviewer for the multiple suggestions and comments. We definitely agree on the fact that the community does not have a clear idea on how to tackle the problem of learning from a small sample, since this is a rarely explored domain, yet very important in practice.\\nFor this reason, we firmly believe that the results reported in this revised version of our paper can be exploited by the community both as baselines for future works and as a practical guide for solving similar problems.\", \"we_made_several_additions_and_changes_to_address_the_weakness_of_our_paper\": \"- In Section 5.4 we motivated the better sample efficiency of ensembles following models proposed in [Geiger et al. 2019] and [Geiger et al. 2020] that we noticed to be in accordance with our experimental findings.\\nWe also motivated the difference in performance among the two losses and provided a hypothesis following the model proposed in [Geiger et al. 2019].\\n\\n\\n- We added the reference to work [Dvornik et al.] since this is a use of ensembles in a somewhat related domain. However, we kindly point out that few-shot learning and learning from a small sample are implemented by the community in two different ways, as we wrote in the introduction. Although their names are semantically similar, in few-shot learning we still exploit a large base set from which it is possible to learn, \\u201cas we know it with big and large nets\\u201d, good representations that we then transfer to smaller and more difficult problems. In our scenario, the lack of data from the beginning presents different challenges that are still not clear to the community, as also noted by the Reviewer, and could not be encountered in few-shot learning. 
Therefore, in our opinion, it is not straightforward to assume that good performance of deep ensembles in few-shot learning would imply the same performance with small datasets.\\n\\n\\n- We added several ablation studies regarding the variation of depth, width, and a number of nets in an ensemble. Refer to Tables 2 and 5 and, in particular, to Section 5.3. Further, we added two more datasets (SVHN and Stanford Dogs) to get more solid empirical evidence.\\nIt is reasonable to doubt if a ResNet-110 is a fair baseline on small datasets. To better compare our results, we considered several more network architectures and layouts as baselines: ResNet-110, VGG-9, DenseNet-BC-52, and DenseNet-BC-121; we varied the depth of baseline ResNets (26, 50, and 110 layers); we added results of shallower and wider networks (ResNet-8-16, ResNet-8-36, ResNet-8-50, ResNet-8-72; VGG-5-32, VGG-5-76; DenseNet-BC-16 (k=12,30), DenseNet-BC-62 (k=32,56)). All the above architectures and layouts have been compared with the corresponding ensemble of networks of the same family with the same computational budget.\\nIn almost all cases, ensembles outperformed the deeper/wider variants, with clear margins, especially over deeper nets (Tables 2, 4, 5).\\n As also suggested by the Reviewer, we made a test with stronger data augmentation on CIFAR-10 adding color distortion and random erasing but still obtained similar trends (Table 5).\\nWe motivate these consistent results following models proposed in [Geiger et al. 2019] and [Geiger et al. 2020] that we noticed to be in accordance with our experimental findings.\\n\\n[Dvornik et al.] \\\"Diversity with Cooperation: Ensemble Methods for Few-Shot Classification\\\"\\n\\n[Geiger et al. 2019] \\u201cJamming Transition as a Paradigm to Understand the Loss Landscape of Deep Neural Networks.\\u201d\\n\\n[Geiger et al. 2020] \\u201cScaling Description of Generalization with Number of Parameters in Deep Learning\\u201d.\"}",
"{\"title\": \"Answer to Reviewer 2\", \"comment\": \"We would like to thank the Reviewer for the constructive comments and we appreciate that finds our approach promising. We performed a more comprehensive evaluation as suggested:\\n\\n\\n- We empirically show the improvement with respect to the state of the art by comparing our ensembles with [Arora et al. 2020] and [Bietti et al. 2019] following the same evaluation protocol of the original papers (Table 1). Note also that in [Bietti et al. 2019] authors kept a held-out training set for tuning hyper-parameters while, in our case, we didn't do it and left default parameters.\\nWe definitely share the same objective and experimental evaluation with these two papers. Therefore, we agree on the fact that ours and their approaches are directly comparable.\\nInstead, approaches proposed in [Drucker and Lecun] and [Miyato et al.] were not evaluated and tested on small image datasets falling outside the aim of our experimental evaluation.\\nFor what concerns evaluation, we used the same evaluation proposed in [Arora et al. 2020] repeating the same experiment multiple times (i.e. different sub-sampled splits for each N) and averaging the test accuracy obtained in each run (i.e. the best testing accuracy scored by the model across the epochs).\\n\\n\\n- We provided in the first submission and in this new version confidence estimates in terms of standard deviations for the different runs in all tables and plots. We added two more datasets (SVHN and Stanford Dogs), we ran several experiments concerning the variation of depth and width (Tables 2 and 5) and we discussed the results in Section 5.3.\\n\\n[Arora et al. 2020] \\\"Harnessing the power of infinitely wide deep nets on small-data tasks.\\\" \\n\\n[Bietti et al. 2019] \\\"A Kernel Perspective for Regularizing Deep Neural Networks\\\"\\n\\n[Drucker and Lecun] \\\"Improving generalization performance using double back-propagation\\\" \\n\\n[Miyato et al.] 
\\\"Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning\\\"\"}",
"{\"title\": \"Answer to Reviewer 1\", \"comment\": \"We\\u2019d like to thank the Reviewer for the positive comments regarding the organization, structure, and topic of our work.\", \"we_made_several_additions_and_changes_to_address_the_weakness_of_our_paper\": \"- We agree on the fact that our ensemble method is not a novel approach since we have purposely chosen to use a well-known technique. However, in regard to deep ensembles with small data, this paper is the first one to perform a comprehensive analysis and detailed experimental evaluation. More details are also given in the reply to Reviewer 3.\\n\\n\\n- We added more experiments on two more datasets (SVHN and Stanford Dogs) and two more architectures (VGG and DenseNet).\\n\\n\\n- We agree on the fact that ResNet-110, at first sight, could be considered a weak baseline in the case of small datasets.\\nIn reply to this suggestion (also shared by other reviewers), we added tests with other baselines, i.e., networks of different depths and widths. More details are given in the reply to Reviewer 3.\\nIn almost all tested comparisons between ensembles and deep/wide networks, including variations in network layouts, datasets, and data augmentation, ensembles remain the best-performing models (new Tables 2, 4, 5 in the revised submission).\\nIn Section 5.4 we motivated the better sample efficiency of ensembles following models proposed in [Geiger et al. 2019] and [Geiger et al. 2020] that we noticed to be in accordance with our experimental findings.\\n\\n\\n- Minor issues: we adjusted references and corrected typos.\\n\\n\\n[Geiger et al. 2019] \\u201cJamming Transition as a Paradigm to Understand the Loss Landscape of Deep Neural Networks.\\u201d\\n\\n[Geiger et al. 2020] \\u201cScaling Description of Generalization with Number of Parameters in Deep Learning\\u201d.\"}"
"{\"title\": \"General comments\", \"comment\": \"We appreciated all the comments from the reviewers that allowed us to extend the experimental analysis and consequently to better support our claims. In particular, we agree that a more comprehensive evaluation is important to assess the improved performance of ensembles of NN with small datasets. Therefore, we executed many additional experiments, fully reported in the paper and in the appendices. As explained more in detail in the replies to the reviewers below, all the performed experiments confirm that ensembles of NN outperform other methods when considering different datasets, different base NN models, and different layouts of the NN.\\nMoreover, in comparison with previous literature published on learning with small data sets, the revised version of our paper provides a more comprehensive analysis of the problem addressing many experimental variabilities. We thus believe that this paper will be very relevant for the community addressing this problem in future works.\"}",
"{\"title\": \"Thorough experiments that are consistent with prior literature, but unclear if findings are novel.\", \"review\": \"This paper studies the effect of ensembling neural networks, particularly to improve accuracy in the low data regime. The paper is well laid out and the experiments are somewhat well motivated, as much experimental study in DL has been around *larger and larger* datasets. The paper seems well written and the experiments are well constructed with means and standard deviations reported for every experiment. These experiments do match prior subsampling experiments literature (at least for Cifar-10).\\nOne minor call out I'd make is that Shankar et al. (https://arxiv.org/pdf/2003.02237.pdf) shows that a non-ensembled CNN has similar performance to a CNTK on subsampled Cifar-10.\\n\\nMy primary problem with this work is the somewhat expected nature of the findings and the limited scope of the experiments. First of all, while Cifar-10 and Cifar-100 are great datasets that researchers should benchmark on, for this line of work I would have liked to see more. How do these trends look on ImageNet? What about non-vision tasks: does something similar happen on SQuAD?\\n\\nFurthermore I'd encourage the authors to explore the tradeoff between number of ensembles vs depth of network more. I'd like to see a 2/3d plot with depth/size of network on one axis, and number of ensembles on the other; what does the tradeoff frontier look like? It is simply not convincing to just look at a ResNet-20 vs a ResNet-8.\\n\\nI also find the loss comparison direction less interesting than the tradeoff between depth/width/number of ensembles.\\n\\nAnyway, the reason for my score is that this seems like a promising direction but looks like early work. 
With a couple more datasets and a more thorough experimentation section I'd accept this paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an ensemble learning method for deep networks in the low data regime. Specifically, the authors empirically compare several ensemble configurations by varying the complexity of base members given a total fixed computational budget. Experiments are conducted on CIFAR-10 and CIFAR-100 datasets. It shows that good results are obtained by keeping the complexity of single models low and increasing the ensemble dimension.\", \"paper_strengths\": [\"The paper is well written and organized. It is easy to follow.\", \"The topic, i.e., ensemble learning for deep networks, is interesting and deserves further studies.\"], \"paper_weaknesses\": [\"The novelty of this paper is low. No significant technical contribution is made in this paper. The proposed method is a simple weighted average ensemble. Additionally, the loss functions employed were developed in previous work, and are also not contributions of the authors. Thus, it seems to be an empirical study paper.\", \"If it is an empirical paper, the experiments are also weak. Only two small-scale (w.r.t. deep learning) datasets, i.e., CIFAR-10 and CIFAR-100, are used for evaluation comparisons. Meanwhile, only the ResNet family is utilized as the backbone. It is encouraged to perform comprehensive experiments on large-scale, diverse (e.g., object-centric data and scene-centric data) vision datasets, as well as different network architectures (e.g., ResNets, VGGs, MobileNets, etc). Current experimental results are not sufficient to support the conclusions of this paper.\", \"More analyses are required, such as attempting to reveal why the observations in this paper happen. In addition, is there a possibility that the limited training data cannot support the training of big networks (e.g., ResNet-110), rather than this being evidence of the effectiveness of the proposed ensemble process? 
Thus, small data trained with small networks (e.g., ResNet-8) can achieve better results since small networks can be trained until parameter convergence.\"], \"minor_issues\": [\"The references are not properly formatted. For example, several references have inconsistent formats, e.g., \\\"In Proceedings of the 28th International Conference on machine learning (ICML-11), \\\" of [Deisenroth and Rasmussen, 2011] vs. \\\"In international conference on machine learning,\\\" of [Gal and Ghahramani, 2016].\", \"There are also several typos in this paper. The authors should carefully proofread the paper.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"The paper is missing explanations of the observed phenomena\", \"review\": \"Summary:\\nIn this paper, the authors provide a series of experiments where they show that when dealing with a very small dataset, a single very deep network is outperformed by an ensemble of multiple shallower networks. More specifically, the authors artificially create training sets from CIFAR10 and CIFAR100 datasets where the number of images per category is limited to 10-250 samples. Then, they compare the test performance of ResNet101, an ensemble of 5 ResNet-20 and an ensemble of 20 ResNet-8, trained for classification with different loss functions, i.e. cross-entropy and cosine distance. The bottom line is that the ensembles work better and have a comparable computational complexity in FLOPs.\", \"strengths\": [\"The topic of the paper fits well in the paradigm of representation learning.\", \"The work demonstrates that the community does not have a good understanding of what kind of models must be used when little data is available for training and brings attention to classical techniques for variance reduction.\"], \"weaknesses\": \"- The paper is basically a compilation of experiments with no explanations of the observed phenomena. The authors perform a set of experiments with already known methods and merely propose that the reader look at the results. I would like to know not only that we need to do ensembles of small networks but also why these ensembles are more efficient than a single deep network in the low data scenario. Why do we observe the difference between using cross-entropy and cosine losses, depending on the network, dataset and its size?\\n- The novelty of the paper is limited. It is already known from [1] that using ensemble methods in few-shot problems helps the performance a lot. Even if the authors propose a different evaluation strategy, referencing existing work in this field is still required.\\n- Ablation studies are missing. 
To be more convinced by the experiments I would like to see how the performance differs if you vary the ensemble size and the single network's capacity. That may improve our understanding of the phenomena too. Experimenting with more datasets may help to answer the question of why the behavior of different loss functions is so different between the two used datasets.\\n- A question of whether a single ResNet101 with vanilla training is a fair baseline. It\\u2019s been known [2] that to achieve better results on a small-data task it is beneficial to train deeper networks with proper regularization rather than shallow networks. Using an ensemble of N networks is identical to using a single network where each layer is N times wider; each convolutional layer will have N times more filters (that could be obtained by concatenating the weights of the original network), however the convolution operation now changes from a standard to a grouped one (has N groups). The resulting output of the fused network must be averaged across the groups to match the ensemble definition exactly. The group-separated convolutions restrict the representational power of the network and introduce stronger regularization, which is most likely the reason for the ensemble to perform better. If we speak about regularizing ResNet101, what kind of regularization did you introduce to adapt it to the small-sized dataset? It is possible that vanilla training with higher weight decay and more data augmentation is not actually efficient in the case of ResNet101 on the small datasets. 
"Instead, it may require introducing more aggressive data augmentation [3,4] or some structural changes, as for example in [5].\\n\\n\\nEven though the direction of research is interesting and definitely useful for the community, the work still needs development to be recommended for acceptance.\\n\\n\\n[1] - Dvornik et al. \\\"Diversity with Cooperation: Ensemble Methods for Few-Shot Classification\\\"\\n[2] - Geiger et al. \\\"The jamming transition as a paradigm to understand the loss landscape of deep neural networks\\\"\\n[3] - DeVries et al. \\\"Improved Regularization of Convolutional Neural Networks with Cutout\\\"\\n[4] - Zhang et al. \\\"Mixup: beyond empirical risk minimization\\\"\\n[5] - Gastaldi \\\"Shake-Shake regularization\\\"\\n\\n\\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nUpdate after the author's comment:\\n\\nI appreciate the effort of the authors to add more experiments that all suggest that the ensembles tend to perform better in the small data regime. This makes the case stronger and the story more compelling, hence I raise my score. However, the paper is still missing the core explanations or a hint of why this may be happening, hence I still cannot recommend the paper for acceptance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
"{\"title\": \"needs more extensive evaluation and comparison with related methods\", \"review\": \"The paper studies the performance of ensembles of deep networks on small-data tasks taken from subsets of Cifar10 and Cifar100, with either the cross-entropy loss or the cosine loss. The authors conduct extensive experiments on these datasets with various choices of sample size, and are careful of evaluating models with comparable computational budget, by considering ResNet architectures of varying depths and with different numbers of models in the ensemble. They find that ensembles of small models tend to outperform single large models.\\n\\nThe approach seems promising, and the extensive experiments provide a comprehensive picture of the performance of various choices of model and ensemble sizes on the Cifar datasets.\\nNevertheless, the proposed method is not compared to any existing models and regularization approaches which are applicable to small datasets, including the cited references or other approaches (e.g. [1-4]). This makes the statement of \\\"improving the state-of-the-art\\\" questionable. The evaluation also does not seem to perform adequate cross-validation (e.g. the authors use the \\\"best test performance\\\" across any epoch).\\n\\nI thus encourage the authors to perform a more comprehensive evaluation of the proposed approach, and further compare to other methods. Some confidence estimates for comparing methods would also be useful, as such small datasets may lead to large variance across different choices of samples. Also, it would be interesting to see how the approach performs beyond just the Cifar dataset, perhaps in other domains where data is more scarce. Other ways to control for computation in each model of the ensemble would be interesting, e.g. how would controlling width instead of depth affect performance?\\n\\n[1] Arora et al \\\"Harnessing the power of infinitely wide deep nets on small-data tasks.\\\"\\n[2] Bietti et al. 
\\\"A Kernel Perspective for Regularizing Deep Neural Networks\\\"\\n[3] Drucker and Lecun \\\"Improving generalization performance using double back-propagation\\\"\\n[4] Miyato et al. \\\"Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning\\\"\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
4HGL3H9eL9U | AT-GAN: An Adversarial Generative Model for Non-constrained Adversarial Examples | [
"Xiaosen Wang",
"Kun He",
"Chuanbiao Song",
"Liwei Wang",
"John E. Hopcroft"
] | With the rapid development of adversarial machine learning, numerous adversarial attack methods have been proposed. Typical attacks are based on a search in the neighborhood of the input image to generate a perturbed adversarial example. Since 2017, generative models have been adopted for adversarial attacks, and most of them focus on generating adversarial perturbations from input noise or an input image. Thus the output of these works is restricted by the input. A recent work targets unrestricted adversarial examples using a generative model, but their method is based on a search in the neighborhood of the input noise, so their output is actually still constrained by the input. In this work, we propose AT-GAN (Adversarial Transfer on Generative Adversarial Net) to train an adversarial generative model that can directly produce adversarial examples. Different from previous works, we aim to learn the distribution of adversarial examples so as to generate semantically meaningful adversaries. AT-GAN achieves this goal by first learning a generative model for real data, followed by transfer learning to obtain the desired generative model. Once trained and transferred, AT-GAN can generate adversarial examples directly and quickly for any input noise, denoted as non-constrained adversarial examples. Extensive experiments and visualizations show that AT-GAN can efficiently generate diverse adversarial examples that are realistic to human perception, and yields higher attack success rates against adversarially trained models.
| [
"adversarial examples",
"adversarial attack",
"generation-based attack",
"adversarial generative model",
"non-constrained adversarial examples"
] | Reject | https://openreview.net/pdf?id=4HGL3H9eL9U | https://openreview.net/forum?id=4HGL3H9eL9U | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"MkUtlPJPl4x",
"zuUJyBYwzYu",
"KHmEWeM1IMq",
"zYPa7TeDoq4",
"pZPq2jKaLAw",
"8x3IC1ZHGB0",
"d24gIqvaa8x",
"mfwKnrx953P",
"9qJB45VB5iR"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040367914,
1606275008399,
1606274906086,
1606274435578,
1606274304012,
1606273914406,
1603901566094,
1603631740569,
1603185748874
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3499/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3499/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3499/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3499/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3499/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3499/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3499/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3499/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"I thank the authors and reviewers for their discussions about this paper. The proposed AT-GAN is a GAN-based method to generate adversarial examples. Similar methods (e.g. Song et al.) have been proposed to use GANs to generate adversarial examples more efficiently. The authors show their method has some numerical benefits; however, more experiments are needed to further justify it. Also, creating \\\"unrestricted\\\" adversarial examples carries a risk of generating samples where the true label is flipped. The authors need to clarify this. Given all this, I think the paper needs a bit more work to be accepted. I recommend that the authors address the aforementioned concerns in the updated draft.\\n\\n-AC\"}",
"{\"title\": \"Response to Review #4 (part 2/2)\", \"comment\": \"3.\\t**Q**: I would like to know if transfer learning technique could be used to reduce the number of required adversarial examples.\\n\\n **A**: AT-GAN transfers the conditional generator which can craft benign examples to generate adversarial examples. But different from transfer learning, the aim of AT-GAN is to generate the examples that fool the target classifier guaranteed by Eq. (3)\\n\\n $$L_1 = \\\\mathbb{E}_{z\\\\sim p_z}[H(f(G_{attack}(z,y_s)), y_t)],$$\\n\\n and are realistic to humans guaranteed by Eq. (4)\\n\\n $$L_2 = \\\\mathbb{E}_{z\\\\sim p_z}[\\\\|G_{original}(z,y_s) + \\\\rho - G_{attack}(z,y_s)\\\\|]_p.$$\\n\\n With the two loss functions, AT-GAN is trained to transfer the generator $G_{original}$ that models the distribution of benign examples to $G_{attack}$ that models the distribution of adversarial examples. This process is different from the training process of GANs and we do not need any adversarial examples for the transferring process.\\n\\n4.\\t**Q**: The attack transferability.\\n\\n **A**: We add experiments and use adversarial examples generated on Model A to attack Model C for MNIST dataset. The results are depicted as follows:\\n\\n | | Nor. | Adv. | Ens. | Iter. |\\n |-|------|------|------|-------|\\n | FGSM | 46.7 | 4.2| 1.7| 4.6\\n | PGD | **97.5**| 6.5| 4.1| 4.1\\n |R+FGSM| 82.3 | 6.7| 4.8| 4.1\\n |Song's| 23.8| 20.8 | 20.6 | **20.1**\\n |AT-GAN| 65.3| **24.6** | **27.9**| 17.2\\n\\n We can see that the examples generated by AT-GAN exhibit moderate transferability.\\n\\n5.\\t**Q**: The adversarial examples used for adversarial training.\\n\\n **A**: Adversarial training aims to defend various adversarial attacks but not limited to the adopted attack for adversarial training. Therefore, the examples we used are not generated by AT-GAN. We adopt three adversarial training in our experiments: a) adversarial training (Adv.) 
[1] uses adversarial examples generated by FGSM, b) ensemble adversarial training (Ens.) [2] uses adversarial examples generated by R+FGSM on the ensemble of models, c) Iterative adversarial training (Iter.) [3] uses adversarial examples generated by PGD.\\n\\n [1] Ian Goodfellow, Jonathon Shlens, Christian Szegedy. Explaining and Harnessing Adversarial Examples. ICLR 2015.\\n\\n [2] Florian Tram\\u00e8r, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. Ensemble Adversarial Training: Attacks and Defenses. ICLR 2018.\\n\\n [3] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR 2018.\\n\\n6.\\t**Q**: Number of examples and time needed to train AT-GAN.\\n\\n **A**: As in A3, we do not need any adversarial examples for the transferring process. For the training time, it takes about 8 minutes for transferring the generator of AT-GAN for Model A on MNIST. As we can craft numerous adversarial examples directly once the generator is transferred, we do not consider such time in the comparison for crafting 1,000 examples in the experiments. We have clarified it in the revision, thank you.\\n\\n7.\\t**Q**: The generating capability, i.e., generating failure ratio, of AT-GAN.\\n\\n **A**: We use the same input, and randomly pick 100 images for each category of MNIST generated by AT-GAN and the original generator, respectively. We then conduct human evaluation to determine whether each example is realistic. 
The evaluation results on the percentage of realistic images are as follows: \\n\\n | Category | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Average |\\n |-------------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|----------|\\n | Original(%) | 100 | 100 | 93 | 94 | 98 | 96 | 99 | 100 | 98 | 100 | 97.8 |\\n | AT-GAN(%) | 100 | 100 | 85 | 91 | 80 | 90 | 97 | 98 | 92 | 100 | 93.3 |\\n\\n We see that adversarial examples in some categories (e.g. 2, 4) are harder to be semantically meaningful than other categories (e.g. 0, 1). On average, however, the generating capability is close to that of the original generator. \\n\\n We have added the human evaluation in the revision. Thank you.\"}",
"{\"title\": \"Response to Review #4 (part 1/2)\", \"comment\": \"Thank you for the valuable comments and suggestions. Below, we would like to address your main concerns:\\n\\n**General comments**: The main idea is to first train a normal GAN and then use the idea of transfer learning based on adversarial examples. The aim sounds good but the authors fail to clearly distinguish the idea with the exiting related methods theoretically or numerically. The idea of transferring is good (although not new), but after checking the implementation details, I have to say in the current version, the fact of transferring is quite limited.\\n\\n**A**: We try our best to clarify the differences of our work with existing related works, and address all your concerns in the following as well as in the revised paper. Note that we did not use any example for the transferring. We also add experiments following your suggestions, which could help improve our manuscript. Thank you.\\n\\n1.\\t**Q**: comparison with existing related methods on generating adversarial perturbation (AdvGAN, AI-GAN).\\n\\n **A**: All the perturbation-based adversarial attacks can be formulated as:\\n $$ min\\\\|\\\\delta\\\\|_p, s.t., f(x+\\\\delta) \\\\neq y, $$\\n where $y$ is the true label of $x$. Both AdvGAN and AI-GAN aim to train a generator that can craft adversarial perturbation $\\\\delta = G(x)$ ($x$ is an image):\\n $$ min_G |G(x)|_p, s.t., f(x+G(x)) = y_t \\\\neq y,$$\\n where $y$ is its label and $y_t$ is target label. It is consistent to the goal of perturbation-based adversarial attacks. \\n\\n In contrast, AT-GAN aims to generate adversarial examples through modeling the distribution of adversarial examples by transferring a pre-trained generator (z is a noise):\\n $$ min_G |G(z,y) \\u2013G_{ori}(z,y)|_p s.t. 
f( G(z,y) ) = y_t \\\\neq y.$$\", \"the_differences_are\": \"a) Different input: AdvGAN and AI-GAN take natural images as input while AT-GAN takes random noise as input.\\n\\n b) Different output: AdvGAN and AI-GAN output the adversarial perturbation for the input image while AT-GAN outputs the adversarial example directly.\\n\\n c) Different training procedure: AdvGAN is similar to train a normal GAN and AI-GAN also considers the adversarial examples for training, while AT-GAN transfers a pre-trained generator to model the distribution of adversarial examples and do not need adversaries for transferring.\\n\\n We agree that our method could not generate adversarial perturbation for a natural image, but the goal of our method is different. We aim to learn the distribution of the adversaries so that the output looks like a natural image but misclassified by the target model. Under such scenario, we could generate diverse adversaries that are not limited to the natural image. AdvGAN and AI-GAN also could not generate adversarial examples directly as we did. Moreover, generating non-constrained adversarial examples is harder and might be very useful in some scenarios. For instance, it can help implement adversarial training to improve the model robustness in few-shot learning.\\n\\n Surely you could first borrow a normal GAN to generate image and then use this image to add perturbation by any perturbation based methods, not only AdvGAN, AI-GAN, but also any gradient based methods like FGSM, PGD. But this is out of the scope of this discussion. \\n\\n2.\\t**Q**: comparison with the work of Song's.\\n\\n **A**: Song's method searches over the neighborhood of the input noise for the pre-trained AC-GAN in order to find a noise whose output image is misclassified by the target classifier. Their method is essentially based on search, while AT-GAN is trained as an adversarial generative model. The generating capability of both Song\\u2019s and ours rely on the GAN. 
We could also implement AT-GAN on other well-designed GANs for other datasets. Addressing your concern, we implement AT-GAN on CIFAR-10 dataset using StyleGAN2-ada (StyleGAN2 with adaptive discriminator augmentation) [1], a recently proposed conditional GAN. The target classifier is wide ResNet w32-10 [2] by normal training (Nor.) and Iterative training (Iter.). The attack success rates are as follows:\\n\\n | Model | PGD | FGSM | AT-GAN |\\n |:--------:|:------:|:------:|:------:|\\n | Nor.(%) | 100.0 | 92.3 | 93.5 |\\n | Iter.(%) | 54.6 | 49.2 | 73.0 |\\n\\n On normally trained models, PGD achieves attack success rate of 100% while AT-GAN achieves attack success rate of 93.5%. However, the adversarially trained model exhibits little robustness against AT-GAN and AT-GAN achieves attack success rate of 73.0%. In Figure 5 in the Appendix D, we illustrate some generated adversarial examples on CIFAR-10 dataset. Thank you for the valuable suggestion. \\n\\n [1] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila. Training Generative Adversarial Networks with Limited Data. NeurIPS 2020.\\n\\n [2] Sergey Zagoruyko, Nikos Komodakis. Wide Residual Networks. BMVC 2016.\"}",
"{\"title\": \"Response to Review #1 (part 2/2)\", \"comment\": \"2.\\t**Q**: The idea seems incremental. The novelty could be further summarized by highlighting the difference with most related works including but not limited to the aforementioned ones.\\n\\n **A**: AT-GAN aims to learn the distribution of adversarial examples so as to generate semantically meaningful adversaries by transferring a pre-trained GAN, and there are no adversarial examples involved during the training. Once transferred, AT-GAN can directly generate adversarial examples from any input noise.\", \"here_we_highlight_the_differences_with_most_related_works_as_follows\": \"+ **NAG, AdvGAN and AI-GAN vs. AT-GAN.** NAG [1], AdvGAN [2] and AI-GAN [3] focus on crafting adversarial perturbations by GANs. NAG [1] takes random noise as input and crafts image-agnostic adversarial perturbation. Such perturbation can be added to many natural images to craft the adversaries. AdvGAN [2] and AI-GAN [3] both use natural images as inputs, and generate the corresponding adversarial perturbations using GAN for the input image. AdvGAN fixes the target class for the generation, while AI-GAN uses projected gradient descent (PGD) attack to inspire the training of GAN and the target class is used as an input. In contrast, AT-GAN does not use any natural image as the input, but generates the adversaries directly from any random noise. Further, compared with AI-GAN, we do not use adversarial examples for the training. \\n\\n + **Song's vs. AT-GAN.** Song's method [4] searches over the neighborhood of the input noise for the pre-trained AC-GAN in order to find a noise whose output image is misclassified by the target classifier. They define such adversaries as the unrestricted adversarial examples, however, their adversaries are still constrained by the original input noise. 
Their method is essentially based on search, while AT-GAN is trained as an adversarial generative model and our output is not constrained by any neighborhood. \\n\\n + **PS-GAN vs. AT-GAN** PS-GAN [5] pre-processes an input seed patch (a meaningful small image) into an adversarial patch that will be added to a natural image (such as a traffic sign) to craft an adversarial example, and an attention model is used to locate the attack area on the natural image. Their method uses a GAN to generate a meaningful adversarial patch based on the original patch, and pastes that patch on a natural image. Though a GAN is involved, their task is very different from ours.\\n\\n In summary, existing works are either based on a search in a neighborhood of the input, or use a generative model to generate the perturbations or patches which will then be added to a natural image. Different from existing works, we aim to model the distribution of adversarial examples by transferring a pre-trained GAN and generate non-constrained adversarial examples directly and quickly from any random noise. \\n\\n We have highlighted the differences and made it clearer in the revision.\\n\\n [1] Konda Reddy Mopuri, Utkarsh Ojha, Utsav Garg, R. Venkatesh Babu. NAG: Network for Adversary Generation. CVPR 2018.\\n\\n [2] Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song. Generating Adversarial Examples with Adversarial Networks. IJCAI 2018.\\n\\n [3] Tao Bai, Jun Zhao, Jinlin Zhu, Shoudong Han, Jiefeng Chen, Bo Li. AI-GAN: Attack-Inspired Generation of Adversarial Examples. arXiv preprint arXiv:2002.02196, 2020.\\n\\n [4] Yang Song, Rui Shu, Nate Kushman, Stefano Ermon. Constructing Unrestricted Adversarial Examples with Generative Models. NeurIPS 2018.\\n\\n [5] Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, Dacheng Tao. Perceptual-Sensitive GAN for Generating Adversarial Patches. AAAI 2019.\\n \\n3.\\t**Q**: Some experiment settings are not clear. 
A brief introduction to Model A to B should be given in the main paper, though the details are provided in the Appendix.\\n\\n **A**: We make the experiment settings clearer, and add a brief introduction of Models A to D to the main paper in the revision. Thank you again for the clarity check.\"}",
"{\"title\": \"Response to Review #1 (part 1/2)\", \"comment\": \"We appreciate the reviewer\\u2019s constructive suggestions and have performed the corresponding revisions. Below, we would like to address your main concerns.\\n\\n1.\\t**Q**: It is expected to see the performance and generated examples with different $\\\\rho$.\\n\\n **A**: We add experiments to investigate the impact of using different $\\\\rho$ in the loss function. As $\\\\rho$ could be constrained by both $\\\\ell_0$ and $\\\\ell_\\\\infty$ norm, we test various bounds, using Model A on MNIST dataset, for $\\\\rho$ in $\\\\ell_0$ and $\\\\ell_\\\\infty$, respectively.\\n\\n a) We first fix $\\\\|\\\\rho\\\\|_\\\\infty=0.5$ and try various values for $\\\\|\\\\rho\\\\|_0$, i.e. 0, 100, 200, 300, 400 (the maximum possible value is 784 for 28$\\\\times$28 input). The attack success rates (ASR) are as follows:\\n\\n $\\\\|\\\\rho\\\\|_0$|0|100|200|300|400\\n -|-|-|-|-|-\\n ASR(%)|98.9|98.8|98.7|96.7|95.8\\n\\n We can observe that different values of $\\\\|\\\\rho\\\\|_0$ only have a little impact on the attack success rates, and the performances are very close for $\\\\|\\\\rho\\\\|_0$ = 0, 100, 200. Figure 6 in Appendix D further illustrates some generated adversarial examples, among which we can see there exist some slight differences on the examples. When $\\\\|\\\\rho\\\\|_0=0$, AT-GAN tends to change the foreground (body) of the digits. When we increase the value of $\\\\|\\\\rho\\\\|_0$ (100 and 200), AT-GAN is more likely to add tiny noise to the background and the crafted examples are more realistic to humans (for instance, smoother on digit 4). But if we continue to increase $\\\\|\\\\rho\\\\|_0$ (300 or 400), AT-GAN tends to add more noise and the quality of the generated examples decays. 
To have a good tradeoff between attack performance and generation quality, we set $\\|\\rho\\|_0=200$.\\n\\n b)\\tWe then fix $\\|\\rho\\|_0=200$ and test different values for $\\|\\rho\\|_\\infty$, i.e. 0, 0.1, 0.2, 0.3, 0.4, 0.5 (the maximum possible value is 1). The attack success rates (ASR) are as follows:\\n\\n $\\|\\rho\\|_\\infty$|0|0.1|0.2|0.3|0.4|0.5\\n -|-|-|-|-|-|-\\n ASR(%)|98.9|99.2|98.9|98.9|98.9|98.7\\n\\n We can observe that different values of $\\|\\rho\\|_\\infty$ have very little impact on the attack performance. Figure 7 in Appendix D further illustrates some generated adversarial examples, among which we can see that a little more noise is added for larger $\\|\\rho\\|_\\infty$ but the differences are very tiny when $\\|\\rho\\|_\\infty = 0.2$ to 0.5. So we simply set $\\|\\rho\\|_\\infty=0.5$ in experiments, but other values of $\\|\\rho\\|_\\infty$ (0.2, 0.3, 0.4) also work. \\n\\n We have added it in the revision. Thank you again for the valuable suggestion!\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We appreciate the positive remarks that greatly encourage us, and the valuable suggestion made by the reviewer that have helped to improve the quality of our paper in the revised version.\\n\\n1.\\t**Q**: The reasons of using AC-GAN and WGAN-GP as the pre-training stage.\\n\\n **A**: There are two main reasons for adopting AC-GAN and WGAN-GP in the pre-training stage for our AT-GAN implementation. 1) In the literature, the combination of AC-GAN and WGAN-GP could build a powerful generative model and can craft realistic images on the evaluated datasets. 2) Song et al. [1] also utilize the same combination, and we follow their experimental setting for the fair comparison. \\n\\n But AT-GAN is not limited to the above GANs. Actually, all conditional GANs that can craft realistic examples could be used for the implementation of AT-GAN in the pre-training stage. For instance, we add experiments on CIFAR-10 using StyleGAN2-ada (StyleGAN2 with adaptive discriminator augmentation) [2], and illustrate some generated examples in Appendix D.3. We have clarified it in the revision.\\n\\n [1] Yang Song, Rui Shu, Nate Kushman, Stefano Ermon. Constructing Unrestricted Adversarial Examples with Generative Models. NeurIPS 2018.\\n\\n [2] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila. Training Generative Adversarial Networks with Limited Data. NeurIPS 2020.\"}",
"{\"title\": \"a straightforward idea\", \"review\": \"The paper proposes AT-GAN (Adversarial Transfer on Generative Adversarial Net) to train an adversarial generative model that can directly produce adversarial examples. Different from previous works, the study aims to learn the distribution of adversarial examples so as to generate semantically meaningful adversaries. AT-GAN achieves this goal by first learning a generative model for real data, followed by transfer learning to obtain the desired generative model. Once trained and transferred, AT-GAN could generate adversarial examples directly for any input noise, denoted as non-constrained adversarial examples. Some experiments and visualizations show that AT-GAN can generate some diverse adversarial examples that are realistic to human perception, and yields higher attack success rates against adversarially trained models.\\n\\nOverall, the idea seems straightforward. Benefiting from the GAN, the proposed model could learn the distribution of adversarial examples to attack the target models. The paper is clearly written and some experiments are conducted. However, I have some concerns as below:\\n\\n1. In the loss function, $\\\\rho$ controls the difference between the outputs of the original and attack GANs; it is expected to see the performance and generated examples with different $\\\\rho$. \\n\\n2. The idea seems incremental. The main contribution is to transfer a pre-trained GAN to an attack GAN to fool the classifiers. The novelty could be further summarized by highlighting the differences from the most related works, including but not limited to the aforementioned ones. The current manuscript makes the work seem like a straightforward combination of many existing approaches. \\n\\n3. Some experiment settings are not clear. 
A brief introduction to Model A to B should be given in the main paper, though the details are provided in the Appendix.\\n\\nAs most of my concerns are addressed by the rebuttal, I would like to raise my score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A meaningful solution to find non-constrained adversarial examples by using adversarial transfer on generative adversarial net\", \"review\": \"This paper proposed the adversarial transfer on generative adversarial net (AT-GAN) to train an adversarial generative model that can directly produce adversarial examples. In other words, AT-GAN could generate adversarial examples directly for any input noise. Such a generative model was able to draw non-constrained adversarial examples.\", \"pros\": \"This paper is clearly written with reasonable paper organization covering background, model design, mathematical formulas and experiments. The goal of this work is obvious with experimental justification. Mathematical description and experimental illustration are desirable to show the merit of this method.\", \"cons\": \"The reasons for using AC-GAN and WGAN-GP in the pre-training stage are missing.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"AT-GAN: An Adversarial Generative Model for Non-constrained Adversarial Examples\", \"review\": \"This paper trains a generative neural network that can output adversarial examples. The main idea is to first train a normal GAN and then use the idea of transfer learning based on adversarial examples. The aim sounds good, but the authors fail to clearly distinguish the idea from the existing related methods theoretically or numerically. The idea of transferring is good (although not new), but after checking the implementation details, I have to say in the current version, the effect of transferring is quite limited.\", \"details\": \"+ the idea of generating adversarial examples by a trained GAN is interesting.\\n+ the writing is quite clear.\\n- lack of comparison with existing related methods. \\n Consider the core formulation, namely (2), which well describes the idea of the authors. But it is necessary to consider the following ideas: \\n\\n 1). generating adversarial perturbations (AdvGAN, AI-GAN): min_G \\\\| G(z,y) \\\\|_p, s.t., f(z+G(z,y)) = y_t \\\\neq y_s.\\n It is to train the difference of G_original and G_attack, and I think in the training aspects, this is almost equal to the proposed idea. The authors try to argue that the proposed model does not require an input. But in my opinion, no input is a disadvantage: if only adversarial examples are needed, AdvGAN etc. can feed a random input to the original GAN and then add perturbations; but if one wants to attack a specific image, the proposed method will fail. \\n\\n 2). attack a GAN to generate adversarial examples (Song's): min_{z'} \\\\|z - z'\\\\|, s.t., f(G(z,y)) \\\\neq f(G(z',y)). \\n The authors may argue that Song's attack procedure takes a longer time. However, no additional training time is needed. Moreover, I guess the generating capability of Song's idea, which relies on the GAN (and there are many well-designed ones), is better than that of the proposed one. 
I would like to see the generating performance of the proposed method on more complicated datasets, e.g., on CIFAR or other HIGH-RESOLUTION images. Another good point of Song's idea is that almost all the attacks on images could be used in parallel. I do not know whether its ASR could be easily improved. \\n\\n- The idea of transferring the original GAN to the attacking one is interesting. However, except for using the original GAN as the starting point, I cannot find other aspects of \\\"transferring\\\". I would like to know if transfer learning techniques could be used to reduce the number of required adversarial examples. \\n\\n- The attack transferability has not been tested. Since there are adversarial samples involved, the obtained GAN is expected to be related to the victim model. \\n\\nAdditional questions, mainly about the experimental results: \\n1. It is good that attack performance on adversarially trained NNs is included. But where do the adversarial examples come from? Are the examples generated by AT-GAN?\\n\\n2. How many examples and how much time are needed to train the AT-GAN?\\n\\n3. Since the GAN has been changed, the generating capability, i.e., the generating failure ratio, of the AT-GAN should be reported.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
OGg9XnKxFAH | Training independent subnetworks for robust prediction | [
"Marton Havasi",
"Rodolphe Jenatton",
"Stanislav Fort",
"Jeremiah Zhe Liu",
"Jasper Snoek",
"Balaji Lakshminarayanan",
"Andrew Mingbo Dai",
"Dustin Tran"
] | Recent approaches to efficiently ensemble neural networks have shown that strong robustness and uncertainty performance can be achieved with a negligible gain in parameters over the original network. However, these methods still require multiple forward passes for prediction, leading to a significant runtime cost. In this work, we show a surprising result:
the benefits of using multiple predictions can be achieved 'for free' under a single model's forward pass. In particular, we show that, using a multi-input multi-output (MIMO) configuration, one can utilize a single model's capacity to train multiple subnetworks that independently learn the task at hand. By ensembling the predictions made by the subnetworks, we improve model robustness without increasing compute. We observe a significant improvement in negative log-likelihood, accuracy, and calibration error on CIFAR10, CIFAR100, ImageNet, and their out-of-distribution variants compared to previous methods. | [
"Efficient ensembles",
"robustness"
] | Accept (Poster) | https://openreview.net/pdf?id=OGg9XnKxFAH | https://openreview.net/forum?id=OGg9XnKxFAH | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"tObSf3QKg2",
"515r0KUngN",
"ChQldYeCZ7s",
"lGwRno3n60",
"16tgzLWjm27",
"hDgCzaHQRBU",
"iKo7vopEvl-",
"QmT9IAdEHfO",
"5o6eaFZLv_",
"0FluA_PN3gR",
"_Px3yjPuMoE"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040423352,
1606254721185,
1606178958528,
1605594762319,
1605594251986,
1605593958398,
1605593677412,
1604387480036,
1604054490855,
1603934732573,
1603336305277
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3498/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3498/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3498/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3498/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3498/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3498/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3498/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3498/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3498/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3498/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a simple but effective method to obtain ensembles of classifiers (almost) for free.\\nEssentially you train one network on multiple inputs to predict multiple outputs. The authors show that this leads to surprisingly diverse networks - without a significant increase in parameters - which can be used for ensembling during test time. \\nBecause of its simplicity, I can imagine that this approach could become a standard trick in the \\\"deep learning tool chest\\\". \\n\\n-AC\"}",
"{\"title\": \"Addressing the concerns\", \"comment\": \"Thank you again R3 for your detailed feedback. We have incorporated them in our revision. If you still have concerns, please let us know. This is the last day for us to reply so we hope we can address any lingering questions.\"}",
"{\"title\": \"Rebuttal Edits\", \"comment\": [\"In response to the reviews, we made a few changes to the paper.\", \"We clarified where the 1% increase in model parameters come from and also included numbers for FLOPs (Reviewers 3 and 4)\", \"We clarified the source of independence in the introduction (Reviewer 1)\", \"We clarified our notation with respect to random variables and their instantiations (Reviewer 4)\"]}",
"{\"title\": \"Addressing the concerns\", \"comment\": \"Thank you for the review, we are glad that you found our work interesting and impactful.\\n\\nTo address the main concern, the 1% increase in model parameters comes from the extra parameters needed in the first and last layers of MIMO. The first hidden layer requires M times more weights for the M inputs and the output layer also requires M times more weight for the M outputs. Of course, the number of extra weights is architecture dependent, but to give an example, in the case of M=3/ResNet28-10/Cifar-10 they account for 1% additional model parameters. We are clarifying this point in the paper.\\n\\nWe report results on three commonly used image classification benchmarks, Cifar-10, Cifar-100 and ImageNet, with 10, 100 and 1000 classes respectively. It would be interesting to see further datasets and tasks, but we leave this for future works.\"}",
"{\"title\": \"Addressing the concerns\", \"comment\": \"Thank you for the detailed feedback, we are glad that you found our work interesting. We would like to address each weakness in order.\\n\\nWe briefly touched on the relationship between MIMO and BatchEnsemble in our related works section and we are expanding on this a little bit. The reason that MIMO has better diversity is that the subnetworks in MIMO learn to ignore the inputs to the other subnetworks, whereas in BatchEnsemble, it is quite likely that the subnetworks share each other's features due to the high level of parameter sharing.\\n\\nThe key to independence is that the subnetworks do not share features, since features derived from one input contain no useful information for classifying another. As a result, the subnetworks learn a more compact, independent representation for each input, and they learn to classify their corresponding inputs while ignoring the other inputs. We are updating the introduction to bring more attention to this key detail.\\n\\nBeyond the intuition for independence, we present two forms of empirical analysis to support our claims. First, we look at the loss landscape of the networks and find that the subnetworks converge to different local minima in weight space (Figure 3) and in function space (Figure 4, t-SNE plot). Second, by analysing the conditional variance of the activations within the network, we show that the subnetworks separate within the network and do not share features (Figure 4). We agree that it would be interesting to theoretically analyze the emergence of independence, but leave this for future work.\"}",
"{\"title\": \"Addressing the concerns\", \"comment\": \"Thank you for the detailed review, we are glad that you found the paper interesting.\\n\\nRegarding the first reservation, we investigate theoretical questions in the regression case where we show that the advantage of MIMO boils down to a bias-variance tradeoff. This analysis showed that MIMO is particularly effective when the network has excess capacity that MIMO can make use of. We leave further theoretical analysis for future works.\\n\\nIn terms of practical advantage, MIMO significantly improves robustness without increasing the computational costs. We agree that if the test-time computational requirements allow for using a deep ensemble, MIMO might not be the optimal choice, however, when the test-time compute is limited, such as the case in self-driving cars, or large-data applications, MIMO offers considerable advantages over using a standard deep neural network.\\n\\nIndeed, at test time we repeat the same input to obtain M predictions. This can occur during training (with a small probability), so this is not out-of-distribution for the model. The model behaves as expected, because the subnetworks learn to ignore each other\\u2019s inputs during training (since the other inputs provide no useful information for classifying their own input). \\n\\nWe can also permute the inputs. What we see is that they each try to classify their own inputs and ignore the others. Since the subnetworks operate independently, they give different predictions for the same input.\"}",
"{\"title\": \"Addressing the concerns\", \"comment\": \"Thank you for the detailed feedback. We are addressing the two main concerns, originality and computational costs, followed by the minor concerns.\\n\\n__Main concern - Originality:__ The core contribution of the paper is that by using a multi-input multi-output approach, we can train independent subnetworks within a network and obtain diverse predictions. This addresses the main drawback of previous multi-headed approaches [1,2]: their subnetworks\\u2019 predictions highly correlate when using only a single input, as shown in Section 3-4. Since in our setup, independence is ensured by the use of multiple inputs, MIMO also eliminates the need for architectural changes (such as in [1]) which further reduces computational costs. To the best of our knowledge, the idea of using multiple inputs for this purpose is novel, and as we show, it comes with significant computational advantages.\\n\\nWe discuss related work on multi-branch architectures in Section 5. We are keen to expand on this. Could you point us towards the works that we missed?\\n\\n__Main concern - Computational costs:__ In the paper, we claim that MIMO\\u2019s computational cost is equivalent to a standard deep neural network at test time. We support this claim by measuring and reporting the inference delay for all models. In Table 1, 2 and 3 the prediction time column refers to the time it takes to do inference on a single image (ms/example). This metric is calculated over a batch of images---64 for CIFAR10/100 and 128 for ImageNet---and then averaging. The inference delay of MIMO is identical to standard deep neural networks, since they both require a single forward pass for evaluation.\\n\\nTo further address the concern, the number of FLOPS for a ResNet28-10/CIFAR-10 is 10,559M and for MIMO (M=3) it is 10,561M (less than 1% difference). 
The only additional computational cost is processing the extra inputs and the extra outputs.\\n\\nRegarding batching, MIMO is fully compatible with batch evaluation. A single input to MIMO has the shape [W, H, MC], where W is the width of the image, H is the height and C is the number of color channels. The third dimension is formed by concatenating the M input images along the channel axis (in the supplied source code, this is done on line 90 in cifar_model.py). At evaluation time, to evaluate MIMO on a batch of 64 images, we first tile the images M times along the channels axis, forming a tensor of shape [64, W, H, MC], execute the 64 forward passes required for MIMO and report the results.\\n\\nWe do not believe that our results are deceptive, let us know in what ways MIMO requires clarification.\\n\\n__Minor concerns:__\\nNotation - we use bold characters to denote random variables and italic to denote their instantiations. Thank you for pointing this out, we are going to clarify this in the paper.\\n\\nRelated works section - we decided to place our related works section after the experiments so that the flow of the paper is uninterrupted. In our opinion, the method is easier to understand this way.\\n\\nIf our reply addressed some of your concerns, please consider updating your score.\\n\\n[1] Lee, Stefan, et al. \\\"Why M heads are better than one: Training a diverse ensemble of deep networks.\\\" arXiv preprint arXiv:1511.06314 (2015).\\n\\n[2] Tran, Linh, et al. \\\"Hydra: Preserving ensemble diversity for model distillation.\\\" arXiv preprint arXiv:2001.04694 (2020).\", \"edited\": \"Updated the number of FLOPS. Our results are in agreement with these results: https://github.com/osmr/imgclsmob/blob/master/chainer_/README.md (WRN-28-10)\"}",
"{\"title\": \"This paper presents an approach to use a multi-input multi-output configuration for training multiple subnetworks with independent tasks. The authors claim that by ensembling the predictions (output of subnetworks) they can improve model robustness without additional computational cost.\", \"review\": \"The authors assessed how these subnetworks can be as diverse as independently trained networks. The contribution of this paper is in proposing an approach to improve uncertainty estimation and robustness with minor changes (1 percent) to the number of parameters and compute cost.\", \"strengths\": \"Training multiple independent subnetworks within a network, with minimal increase in the number of parameters. \\nThe use of MIMO makes this approach simple, while it can be evaluated in a single forward pass.\", \"concerns\": \"The authors claim that the benefits of using multiple predictions can be achieved \\u2018for free\\u2019, while their proposed model increases the number of parameters (even though by 1 percent).\\nThe paper has examined the accuracy and disagreement of the subnetworks, but a detailed evaluation of the number of parameters is missing (i.e. where the 1% increase in parameters comes from).\\nAn experiment on more diverse datasets would also be helpful, such as OpenImages.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A simple and effective way to train a single network as an ensemble of networks\", \"review\": \"SUMMARY:\\nThis paper describes a multi-input multi-output (MIMO) strategy for training several subnetworks inside a same and single neural network for robust prediction. The approach consists in jointly training the heads to make predictions for their corresponding inputs. The strategy is simple and demonstrates strong results. The experimental study reveals that subnetworks functionally behave as independent networks, hence resulting in a strong and robust ensemble.\", \"strengths\": [\"The experimental study is very thorough. I really appreciated the investigation in Section 3, which indeed convincingly shows that the subnetworks behave as independent.\", \"The method is compared on standard benchmarks, across a wide range of metrics. Experimental results show better performance than single forward pass methods. Performance is reasonable with respect to a simple solution consisting in training an actual ensemble of 4 networks.\", \"The paper is well written and easy to follow.\"], \"weaknesses\": [\"I believe the approach to be original, but its similarities/differences with other multi-input multi-output (such as BatchEnsemble) could have been discussed much further to better appreciate the originality of MIMO.\", \"Although independence between subnetworks is shown empirically, I cannot help but wonder how/why it emerges from the architecture! This is an exciting phenomenon that ought to be better understood. 
Nevertheless, I believe the actual independence of subnetworks should be nuanced at places (e.g., in the title or in the abstract), as this highly depends on the capacity of the architecture and on the problem to solve -- as shown in the experiments themselves.\", \"Some (hypothetical) theoretical explanations regarding the emergence of independence would have made the paper stronger, although I realize this would be a whole new paper by itself.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A clever idea and interesting empirical results\", \"review\": \"This paper proposes to train a single network with M input examples and M corresponding predictions, and the M input examples are mixed to produce the M corresponding predictions. Although only a single network is learned, it implicitly consists of multiple sub-networks due to the nature of multiple inputs and multiple outputs in training. In testing, the single testing example can be replicated M times as inputs, so that M outputs are produced by the trained network. The multiple outputs enable efficient ensembling for robust prediction.\\n\\nI find the proposed idea very clever. The empirical results on toy data and real data are interesting and compelling. It demonstrates that a single network has the capacity to contain multiple sub-networks, which is an interesting discovery in itself. \\n\\nI do have two main reservations. The first one is the lack of some basic, not necessarily rigorous, theoretical formulation and analysis. The second one is about its practical potential. Apparently M has to be rather small, due to the limited capacity of a single network. Its advantage over training multiple networks may not be dramatic. \\n\\nI am also a bit concerned that the network is trained on M independent examples (although the proposed method does allow for occasional identical examples), but is tested on M identical copies of the same testing example. \\n\\nI am also unclear about the nature of mixing M independent training examples, and the effect of doing that. That is why I feel some theoretical understanding is needed. \\n\\nWhat if we permute the M training examples and check the difference of the corresponding outputs?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Limited novelty\", \"review\": \"Summary: The ensemble method MIMO is proposed in this paper to reduce the inference delay while keeping the prediction diversity. Only one model with sufficient capacity exists in this method, while multiple implicit subnets are embedded in the parent model. Each subnet has individual I/O, so only one forward pass of the parent model is needed to process all subnets and make the ensemble.\", \"quality\": \"Medium to high. Pros: 1) The dissections of the subnets are impressive. They design the experiments to survey the loss plane and the parameter/activation projections of different subnets. 2) With the proposed training paradigm and proper test setting, improvements can be seen in both accuracy and uncertainty estimation. 3) They report SOTA results for accuracy, uncertainty, and robustness on various datasets and their OOD variants when considering the inference latency. Cons: 1) They only report the inference time of one sample, but the total computation costs (e.g. MACs or FLOPS) are omitted. 2) Multi-branch networks are not only used for ensembling; broader usages exist. I know at least three works which involve multi-branch architectures (output-wise or layer-wise) in robustness or knowledge distillation, and some of them also use the ensemble of multiple predictions. The lack of citations in the related work seriously diminishes the quality of this work.\", \"clarity\": \"High. Pros: 1) The method framework has a brief and clear explanation (Figure 1). 2) The training methods are conveyed in every detail, including some techniques like \\u201cinput repetition\\u201d. 3) Some critical plots report accuracy using error bars or box plots to display the performance variance. Cons: Some formatting mistakes in symbols exist. 
For example, the authors mix the usage of the normal and italic font of \\u201cx\\u201d when referring to samples in different expressions; this inconsistent usage even occurs within a single equation, e.g. the first equation of Section 3.3.\", \"originality\": \"Low to medium. Pros: This method successfully uses a multi-branch architecture to reduce the inference delay in ensembling, which is a rather novel idea. Cons: As mentioned above, many similar multi-branch architectures have been used in different but related topics. Considering all of these works, the originality of this work has to be downgraded.\", \"significance\": \"Low to medium. Pros: The shining point of this work is making ensembles for \\u201cfree\\u201d. They do decrease the inference delay significantly. Cons: 1) The authors attempt to distract our attention from the computational costs of this MIMO architecture and create the illusion that it is convenient in computing. The latency is decreased at the cost of batch size. Other ensemble methods have a longer delay, but they can process a batch of images. This method fills the batch with the same image, which implicitly decreases the batch size. Furthermore, no measurements of computation costs or other related aspects are provided. I think it\\u2019s deceptive and tricky. 2) As shown above, the originality of this work is not as solid as their experiments.\\n\\n[Detailed comments]\\n1. What are the structural considerations for the related work of Section 5 not being explained in Section 2?\\n2. In Figure 6, the \\u2018#\\u2019 marked on the abscissa of (b) is redundant.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
Mos9F9kDwkz | Complex Query Answering with Neural Link Predictors | [
"Erik Arakelyan",
"Daniel Daza",
"Pasquale Minervini",
"Michael Cochez"
] | Neural link predictors are immensely useful for identifying missing edges in large scale Knowledge Graphs. However, it is still not clear how to use these models for answering more complex queries that arise in a number of domains, such as queries using logical conjunctions ($\land$), disjunctions ($\lor$) and existential quantifiers ($\exists$), while accounting for missing edges. In this work, we propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs. We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural link predictor. We then analyse two solutions to the optimisation problem, including gradient-based and combinatorial search. In our experiments, the proposed approach produces more accurate results than state-of-the-art methods --- black-box neural models trained on millions of generated queries --- without the need of training on a large and diverse set of complex queries. Using orders of magnitude less training data, we obtain relative improvements ranging from 8% up to 40% in Hits@3 across different knowledge graphs containing factual information. Finally, we demonstrate that it is possible to explain the outcome of our model in terms of the intermediate solutions identified for each of the complex query atoms. All our source code and datasets are available online, at https://github.com/uclnlp/cqd. | [
"neural link prediction",
"complex query answering"
] | Accept (Oral) | https://openreview.net/pdf?id=Mos9F9kDwkz | https://openreview.net/forum?id=Mos9F9kDwkz | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"uhD0KidsT05",
"ZfDjFWyO6ye",
"SwsuiWChqrL",
"z_NUaD0stnp",
"9D0ueVgdUEH",
"ByZB__4hwC",
"F9dOacxrtgx",
"Cr9qg30TVsq",
"te9aaMuCX8T",
"mSaYMOeIjKi",
"BSnnFjCdUL-",
"KimDoIp84Rp",
"Fm1mCn1WNX4",
"CGldkk2zr0f",
"w_unFNZGW2i"
],
"note_type": [
"comment",
"comment",
"comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1617375538864,
1617374785589,
1615928349210,
1615392005246,
1610040406511,
1606224955210,
1605537449614,
1605200964144,
1605199268723,
1605198353931,
1605198244174,
1604003412392,
1603949651508,
1603831587298,
1603558350138
],
"note_signatures": [
[
"~Pasquale_Minervini2"
],
[
"~William_W._Cohen2"
],
[
"~Pasquale_Minervini2"
],
[
"~Jiaxin_Bai1"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3496/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3496/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3496/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3496/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3496/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3496/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3496/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3496/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3496/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3496/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Re: Faithful Embeddings for Knowledge Base Queries\", \"comment\": \"Hi William! Somehow we managed to miss this one, it's probably the most related work I've seen so far -- thank you for sharing it, we will include it ASAP.\"}",
"{\"title\": \"related work: Faithful Embeddings for Knowledge Base Queries\", \"comment\": \"I haven't read through your paper in detail yet - it looks very interesting! - but our NeurIPS paper https://arxiv.org/abs/2004.03658 is quite related - we also compared to Query2Box, and we also were able to get good performance by training only on simple queries.\"}",
"{\"title\": \"Re: Questions about the optimization methods\", \"comment\": \"Hi Jiaxin!\\n\\n> Generally, the paper said only the atomic queries are used to train the model. What are atomic queries? Are they 1-projection queries? If the model is only trained on 1-projection queries, it is basically training a link predictor. Then why do you describe two optimization methods for complex queries? This is really confusing. Or actually, was the model trained on multiple complex query types?\\n\\nYes, we train a neural link predictor to be able to answer atomic queries! The problem is that you still need to find the optimal variable assignments for each query -- depending on how you cast it, you can see it either as a combinatorial optimisation problem (if you search for the optimal variable-to-entity mapping) or a continuous optimisation problem (in case you search for the optimal variable-to-entity embedding mapping), hence the two optimisation methods.\\n\\n> What is your inference method for your model?\\n\\nThe two optimisation methods are used for inference!\\n\\n> Why not just use a pre-trained link predictor?\\n\\nWe do that! :) We had to experiment with several configurations to find the optimal hyperparameters. Then we used the best models we trained for answering complex queries (we uploaded all models online, the link is available on the GitHub repo).\\n\\n> Will you release the code for the experiments?\\n\\nYes, it's online! The link should be visible in the abstract.\"}",
"{\"title\": \"Questions about the optimization methods\", \"comment\": \"I am a researcher that is interested in the area of complex query answering. However, the descriptions of the experiments are so vague that it is hard to reproduce the experiment according to the paper only. So I have to ask some questions here about the experimental settings and evaluation details.\\n\\n\\n1. Generally, the paper said only the atomic queries are used to train the model. What are atomic queries? Are they 1-projection queries? If the model is only trained on 1-projection queries, it is basically training a link predictor. Then why you describe two optimization methods for complex queries? This is really confusing. Or actually, the model was trained on multiple complex query types?\\n\\n2. What is your inference method for your model? It seems that your model can be optimized in two ways as mentioned in the paper. But during the inference time, only the beam search method can be used to do inference. My question is whether both CQD-CO and CQD-Beam models are evaluated by the Beam search method.\\n\\n3. Why not just use a pre-trained link predictor? It seems that a pre-trained link predictor can be also used to do inference by using beam search. Also, I think this might be the most appropriate baseline to evaluate the effectiveness of both optimization methods. \\n\\nAlthough I am really confused by some descriptions in the paper, I really think this is a great paper. Because it points out an important new direction of solving complex query answering. \\n\\n\\nWill you release the code for the experiments?\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Oral)\", \"comment\": \"The reviewers unanimously agree that this paper is a strong accept; it makes important progress in developing our ability to query relational embedding models.\"}",
"{\"title\": \"Response to Reviewer 2 (2)\", \"comment\": \"- The proposed two optimization methods are independent of neural link predictors. Can you use the same neural link predictor for your models and GQE for fair comparison?\\n\\nWe just updated our submission with additional experiments on adopting DistMult rather than ComplEx as the underlying neural link prediction model - results are available in the appendix (Sect. C). We find that results with DistMult are slightly less accurate than with ComplEx, as we expected, while still more accurate than the GQE and Q2B baselines.\\n\\nA bilinear interaction model similar to DistMult was also considered as a projection operator by GQE, with significantly less accurate results. We believe that our improvements in terms of accuracy can be attributed not just to the use of a competitive neural link predictor, but also to the compositional nature of our model, which allows it to generalise from atomic to complex queries thanks to the use of t-norms and t-conorms.\"}",
"{\"title\": \"thanks for the clarifications\", \"comment\": \"Your comment cleared my doubts\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your valuable comments and feedback. Please find our response to your questions next. We did incorporate your remarks about the presentation in the updated version.\\n\\n- 1-chain appears here for the first time. \\\"In all cases, we use a rank of 500.\\\": for rank do the authors mean the embedding size?\\n\\nYes! We do refer to the embedding size as the rank - we clarified this in the updated version.\\n\\nWe searched for the optimal embedding size by tuning it on a held-out validation set - we now provide a detailed description of the hyperparameter search process, as well as results with different embedding sizes (see Appendix, Sec. A).\\nOverall, we note that even with a lower rank of 100, our method still produces more accurate ranking results than baselines with larger embedding sizes (GQE and Q2B).\\n\\n- From a technical point of view the article seems sound but the authors say that \\\"Then, after we identified the optimal representation for variables A,V_1,\\u2026V_m, we replace the query target embedding e_A with the embedding representations e_c\\u2208Rk of all entities c\\u2208E, and use the resulting complex query score to compute the likelihood that such entities answer the query.\\\" [..] isn't there a method to exploit the information in eA?\\n\\nIndeed, we discard the embedding of the target variable, because ultimately our aim is to score actual entities in the KG. \\nIn a previous iteration of the project, we were ranking entities according to their distance from the vector e_A. 
However, we quickly realised this makes strong assumptions on the geometry of the embedding space induced by the neural link predictor, and also produced less accurate results.\\n\\n- Page 7: \\\"Since a query can have multiple answers, we implement a filtered setting, whereby for a given answer, we filter out other correct answers from the ranking before computing H@3.\\\": this sentence is not clear. Does it mean that answers that follow from the KG without completion are removed from the ranking?\\n\\nWe use the same evaluation protocol as GQE and Q2B: when ranking the candidate answers for a query and the gold answer is x, we remove the other entities that correctly answer the query and are different from x. This setting is used for not penalising the model for ranking other correct answers higher than x, since all these answers are valid.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your valuable feedback and comments! We next address your comments.\\n\\n- What is particularly important/challenging about EPFO queries, beyond existential and conjunctive ones?\\n\\nWe consider a query as determined by a series of arbitrary constraints expressed in some language: a more expressive language makes it possible to answer a broader set of queries.\\n\\nThe simplest case is link prediction, where the constraint is a single predicate, whereas recent methods have started considering existentially quantified variables and conjunctions of predicates [1] and disjunctions [2] (which, together with conjunctions, form EPFO queries). Considering EPFO queries is thus a step towards answering increasingly more expressive queries.\\n\\n- Could you talk more, give more insights about the 8 complex query types? Why are they important?\\n\\nThe 8 query structures that we experiment with allow us to compare with other works in the complex query answering literature that use such queries for evaluation - see e.g. [1, 2].\\n\\n- The query \\u201cWhat international organisations contain the country of nationality of Thomas Aquinas?\\u201d sounds really artificial. Maybe there is a better example involving entities and relations, similar to the drugs one?\\n\\nThat\\u2019s right - thank you for pointing this out: we added a more realistic example in the updated version of the paper.\\n\\n- Could you say a bit more with respect to how the KG incompleteness is accounted for in the evaluation?\\n\\nThe queries we evaluated on are standard datasets proposed and used by e.g. [1, 2]: some edges are removed at random from the KG, and the queries are generated in such a way that one needs the missing edges in order to answer them. Then, a neural model is trained to answer the queries while accounting for the missing edges.\\n\\n[1] Hamilton et al. 
2018, \\u201cEmbedding Logical Queries on Knowledge Graphs\\u201d, NeurIPS 2018.\\n\\n[2] Ren et al. 2020, \\u201cQuery2box: Reasoning over Knowledge Graphs in Vector Space using Box Embeddings\\u201d, ICLR 2020.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your questions and valuable feedback.\\n\\n- I think that there is an excessive mathematical formalism that is a bit unnecessary. There is no reason for that, the fact that the idea is very simple does not mean that we have to add extra formalism.\\n\\nIndeed, we agree with you: we tried our best to explain our method as simply as possible (by providing plenty of examples and visual intuitions), while still using the same terminology as related work in this area (to unambiguously specify what kind of queries our method can answer), and without loss in generality.\\nWe also think that the notation allows us to conveniently re-state the problem as an optimisation problem where scores are computed using t-norms and t-conorms.\\nWhich parts do you think can be improved in terms of clarity?\\n\\n- In the introduction, the authors claim they use less data than the other methods, but they don\\u2019t make it very clear in the experimental section. I think they need to be more explicit about that. They need to clarify that less data means just the 1-hop queries.\\n\\nThank you for pointing this out, we agree that we could have been more explicit about the fact that our method only requires 1-hop queries for training: we have added details about this and actual numbers to contrast with the amount of data required by other methods.\\n\\n- I have two reservations about the paper. I suspect that part of the success of their approach is the ComplEx embeddings.\\n\\nWe agree that ComplEx is a significant contributor to the statistical accuracy in our model, which we chose since it is a fairly simple but still extremely competitive neural link predictor [1]. As an additional analysis, in the updated version of the paper (Tab. 
3) we report results with ComplEx with different rank values (embedding sizes), showing we can significantly reduce the embedding size in the underlying neural link predictor without losing too much in ranking accuracy. Furthermore, we are now in the process of running additional experiments with DistMult, another neural link prediction model, which should be ready in the next two days.\\n\\n[1] \\u201cYou CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings\\u201d - ICLR 2019, https://openreview.net/forum?id=BkxSmlBFvr\\n\\n- The other concern is about timing results. It would help to know how the whole algorithm compares in terms of time to answer the query compared to the others. I think the difference between the two optimization techniques is of particular interest. I suspect that the greedy one might be too slow for longer chains. From the preliminary analysis, it seems to grow as $k^d$ where k is the width of the beam per relation and d is the length of the chain.\\n\\nThank you for pointing this out as well - we just included accurate timing results in the updated version of the paper (Appendix, Sect. B). We found that the time required for the combinatorial optimisation is on par or higher than Q2B, but always below 50ms per query.\\n\\nIndeed, the greedy algorithm tends to be slower on longer chains: the neural link predictor is invoked once for each of the hops in the chain, for obtaining a list of top-k candidates to use in the next step in the chain. In our experiments, identifying the top-k candidates for each step in a chain was not an issue, since all candidate entities can be scored in parallel very efficiently on GPU.\\nA potential bottleneck in terms of space complexity can be the number of candidate variable assignments, which at the moment is given by $k^d$ (we did not experience any issues related to this, since in the datasets we considered $d$ is at most 3). 
A solution for handling longer chains may consist in trading complexity with completeness, and e.g. set an upper bound to the number of candidate variable assignments being considered.\"}",
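The $k^d$ beam-search behaviour described in this exchange can be sketched as follows. This is an illustrative toy, not the authors' implementation: the entity and relation names, the hand-made score table standing in for the neural link predictor, and the `phi`/`greedy_chain` helpers are all invented for the example.

```python
# Toy stand-in for a neural link predictor: phi(head, relation, tail) in [0, 1].
TOY_SCORES = {
    ("obama", "born_in", "honolulu"): 0.9,
    ("obama", "born_in", "chicago"): 0.2,
    ("honolulu", "capital_of", "hawaii"): 0.8,
    ("chicago", "capital_of", "illinois"): 0.1,
}
ENTITIES = ["honolulu", "chicago", "hawaii", "illinois"]

def phi(h, r, t):
    return TOY_SCORES.get((h, r, t), 0.0)

def greedy_chain(anchor, relations, k=2, t_norm=min):
    """Beam search over a chain query anchor -r1-> V1 -r2-> V2 -> ...

    Each partial variable assignment keeps its top-k continuations per
    hop, so the number of candidate assignments grows as k**d for a
    chain of length d; atom scores along a path are aggregated with a
    t-norm (Godel: min, product: a * b).
    """
    beams = [((), 1.0)]  # (variable assignment so far, score so far)
    for r in relations:
        new_beams = []
        for path, s in beams:
            h = path[-1] if path else anchor
            # Score every candidate entity for this hop, keep the top-k.
            scored = sorted(((t_norm(s, phi(h, r, t)), t) for t in ENTITIES),
                            reverse=True)[:k]
            new_beams += [(path + (t,), sc) for sc, t in scored]
        beams = new_beams  # size k ** (number of hops processed so far)
    return sorted(beams, key=lambda b: b[1], reverse=True)

best_path, best_score = greedy_chain("obama", ["born_in", "capital_of"])[0]
```

Swapping `t_norm=min` for `lambda a, b: a * b` switches from the Gödel to the product t-norm; either way, bounding the beam size is what would trade completeness for space on longer chains, as the response suggests.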
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your questions and valuable feedback.\\n\\n- For the first method, continuous optimization in sec 3.1, what is the difference between this method and the previous works GQE, Q2B, etc. apart from different neural link predictors? \\n\\nGiven a complex query, GQE and Q2B produce an embedding representation of such a query and use it for ranking all candidate answers according to a matching score between the query and the answer embeddings. On the other hand, our model decomposes a complex query into simpler (atomic) queries, which are answered individually using a neural link predictor, and then intermediate scores are aggregated using t-norms and t-conorms -- continuous relaxations of the logical conjunction and disjunction operators.\\n\\nBy doing so, we are also able to produce explanations for why a given answer was selected in terms of the intermediate answers for the atomic queries -- we elaborate on this aspect in the updated version of this paper.\\n\\nYou mention that \\u201cEspecially for path queries, e.g., given a two-hop query, (Obama,BornIn,V1) \\u2227 (V1,CapitalOf,V2), then the optimal e_V1 will be eObama+eBornIn\\u201d. We completely agree with this: if we select TransE as our underlying neural link predictor, indeed CQD-CO would be quite related to the model you just proposed.\\n\\nHowever, doing the same for other neural link predictors such as DistMult and ComplEx is not as simple, since the values of e_V1 and e_V2 identified by optimising the query score would not be meaningful (it\\u2019s possible to maximise the query score by just increasing the norm of e_V1 and e_V2). Our aim is proposing a solution that is not model-dependent (i.e. 
that can be used with any neural link predictor); with interesting explainability properties; and does not require training on complex queries (we can aggregate intermediate results with t-norms and t-conorms, without learning additional parameters) while still achieving SOTA results.\\n\\n- For the second method, the time complexity seems exponential with respect to the number of hops.\\n\\nIndeed, for a multi-hop query, the second method produces $k^m$ variable assignments -- this was not an issue in our experiments since in the complex query answering datasets considered by GQE and Q2B, the m in multi-hop queries is at most three. We argue that, for higher values of m, we can control the space complexity of the method by using a more space-efficient variant of the method, where the number of variable assignments is bounded to some constant.\\n\\nFurthermore, answering each hop and identifying the top-k candidates can be done in constant time on GPU, since all candidate entities can be scored in parallel very efficiently (in ComplEx, the scoring function is a trilinear dot product, thus scoring all entities can be reduced to a matrix-vector multiplication).\\nWe updated our paper including actual timing measurements in the appendix.\\n\\n- How did you calibrate the output of ComplEx so that $\\\\phi_{p}(e_{s}, e_{o})$ is in [0,1]?\\n\\nWe use the sigmoid to map ComplEx scores to values in [0, 1].\\n\\n- Can you list the inference time of both models (continuous, combinatorial) and compare it with GQE/Query2box?\\n\\nWe included explicit timing results for combinatorial optimisation in the updated version of the paper - see Sect. B in the Appendix. 
For the continuous optimisation version, we found that it can take between 1 to 10 seconds to answer all the queries in FB15k, FB15k-237, and NELL, thanks to the fact that those operations can be efficiently parallelised on GPU.\\n\\n- For 1p performance, it is equivalent to the performance of ComplEx on link prediction right?\\n\\nYes, the performance of our method on 1p (atomic) queries is determined by the accuracy of the neural link predictor. However, upon further investigation, we noticed that Q2B uses a different evaluation procedure for atomic queries: we updated the results table, and we find that our method produces more accurate results on atomic queries as well.\\n\\n- The table 3 is confusing, why are the numbers (e.g., 5.5, 46.76) larger than 1, I think the model normalized the output of $\\\\phi$ to [0,1]?\\n\\nThank you for pointing this out -- we reported the logits rather than the normalised values. We solved the issue in the updated version of the paper, and added an additional example.\\n\\n- The proposed two optimization methods are independent of neural link predictors. Can you use the same neural link predictor for your models and GQE for fair comparison?\\n\\nWe chose ComplEx because it is a very simple yet extremely effective neural link predictor, but indeed it would be interesting to test our method with different models. To account for this, in the updated version of the paper we included experiments with different ranks for ComplEx - showing we can decrease the rank from 1000 to 100 without significantly decreasing the predictive accuracy of the model - and are now in the process of running additional experiments with DistMult (which is also considered in GQE).\"}",
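The two mechanics discussed in this response (the ComplEx trilinear dot product, and the sigmoid used to calibrate its output into [0, 1]) can be written out in a minimal numpy sketch. The random toy embeddings, sizes, and helper names below are invented for illustration; this is not the authors' code.

```python
import numpy as np

def complex_score(e_s, w_p, E_o):
    """ComplEx logit: Re(<e_s, w_p, conj(e_o)>). Passing a matrix of
    candidate objects scores all of them in one broadcasted product,
    which is why per-hop top-k ranking is cheap on GPU."""
    return np.real(np.sum(e_s * w_p * np.conj(E_o), axis=-1))

def calibrated_score(e_s, w_p, E_o):
    """A sigmoid maps the unbounded logit into [0, 1], so that atom
    scores can be aggregated with t-norms and t-conorms."""
    return 1.0 / (1.0 + np.exp(-complex_score(e_s, w_p, E_o)))

rng = np.random.default_rng(0)
rank = 16  # embedding size ("rank"), toy value
E = rng.normal(size=(5, rank)) + 1j * rng.normal(size=(5, rank))  # 5 toy entities
w = rng.normal(size=rank) + 1j * rng.normal(size=rank)            # one toy relation
scores = calibrated_score(E[0], w, E)  # entity 0 as subject vs. all candidates
```

Scoring all entities for one (subject, relation) pair is a single vectorised operation over the candidate matrix, matching the "matrix-vector multiplication" point in the response.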
"{\"title\": \"Complex logical query evaluation with link predictors\", \"review\": \"Summary:\\nThis paper proposes Continuous Query Decomposition (CQD) a novel method for evaluating complex queries over incomplete KGs. Each variable of a logical query (involving existential quantifiers, conjunctions and disjunctions) is mapped to an embedding. A link predictor, trained on single edge prediction, is used to score the atomic query involving the variable. The full query is evaluated using continuous versions of the logical operators and gradient-based or combinatorial optimization.\\nEvaluating complex logical queries on (necessarily incomplete) KGs and other graph-structured data is an important problem for data mining purposes. The paper proposes an elegant and effective method. \\n\\nStrong points\\nElegant, efficient solution.\\nSOTA results.\\nProvides aspects of explainability, although this could be discussed and illustrated better.\\n\\nDetailed comments\\n- What is particularly important/challenging about EPFO queries, beyond existential and conjunctive ones? Obviously it is an extension that covers more FOL, but a qualitative discussion would help the reader, particularly with respect to applications to KGs.\\n- Could you talk more, give more insights about the 8 complex queries types? Why are they important?\\n- The query \\u201cWhat international organisations contain the country of nationality of Thomas Aquinas?\\u201d sounds really artificial. Maybe there is a better example involving entities and relations, similar to the drugs one?\\n- Could you say a bit more with respect to how the KG incompleteness is accounted for in the evaluation?\\n- The paper mentions \\u201c.. 
in many complex domains, an open challenge is developing techniques for answering complex queries involving multiple and potentially unobserved edges, entities, and variables, rather than just single edges.\\u201d It would be great to articulate this more for the sake of providing context and motivation.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A new method on reasoning on KG, nice empirical results\", \"review\": \"The paper aims to answer complex queries on knowledge graphs. Different from previous methods that aim to embed the queries, the method views the query answering problem as an optimization / search problem where the goal is to find the most plausible entities on the reasoning path. The merits are that the method only needs to train on 1 hop path queries (link prediction), saving the effort of training on complex queries as in previous work, and proposes two solutions, which both achieve nice results on standard multi-hop reasoning benchmarks. It also demonstrates interpretability of the model by showing some examples of the intermediate entities found in the reasoning path when answering a complex multi-hop query.\\n\\nI think the paper is clear and easy to follow. I have some questions regarding the two methods. For the first method, continuous optimization in sec 3.1, what is the difference between this method and the previous works GQE, Q2B, etc. apart from different neural link predictors? Especially for path queries, e.g., given a two hop query, $(\\\\text{Obama}, \\\\text{BornIn}, V_1)\\\\wedge(V_1, \\\\text{CapitalOf}, V_2)$, then the optimal $e_{V_1}$ will be $e_{Obama}+e_{BornIn}$, because the distance will be 0, and $\\\\phi_p$ will be 1 (here it assumes TransE model, and of course it can be generalized to DistMult, ComplEx, etc.). Then the first formulation is in essence very much similar to GQE, because GQE/Q2B also models $\\\\mathbf{e}_{V_1}$ in the exact same way and the difference only lies in (1) you use ComplEx (2) t-norm modeling of conjunction? 
However, it seems that t-norm demonstrates less expressiveness for modeling conjunction because both GQE/Q2B models conjunction using a MLP with additional learnable parameters, which can also approximate t-norm and even be more adaptive depending on the training queries/KG.\\nFor the second method, the time complexity seems exponential with respect to the number of hops. For a m hop query, and each step you keep the top-k, then do you end up with $k^m$ entities?\\n\\nAdditional questions:\\n1. How did you calibrate the output of ComplEx so that $\\\\phi_p(e_s, e_o)$ is in [0,1]? Better to add more details on neural link predictors. \\n2. Some ablation studies that use different t-norm and t-conorm other than the Godel and product may make the argument stronger.\\n3. There exists a tradeoff between the inference time and training queries. For GQE/Q2B, they can leverage complex queries to train the conjunction operator (MLP), so that during inference, there is no need to do any optimization. But for the proposed method, it saves the effort of training on complex queries, however, during inference, the method needs an online optimization process to instantiate the variables on the path. Especially for CQD-CO, the authors mention that they need to optimize online for 1000 iterations, which is too expensive for answering a query. Can you list the inference time of both models (continuous, combinatorial) and compare it with GQE/Query2box?\\n4. For 1p performance, it is equivalent to the performance of ComplEx on link prediction right?\\n5. The table 3 is confusing, why are the numbers (e.g., 5.5, 46.76) larger than 1, I think the model normalized the output of $\\\\phi$ to [0,1]?\\n6. The proposed two optimization methods are independent of neural link predictors. Can you use the same neural link predictor for your models and GQE for fair comparison? 
You can train a TransE model for the neural link predictor, and accordingly define $\\\\phi_p$, then it will be clear to show whether the gain comes from a different neural link predictor (TransE vs ComplEx), or comes from the t-norm and the two optimization methods. And of course another choice is the other way around, e.g., use the ComplEx version of GQE and make the same comparison.\\n\\nMinor points to fix:\\n1. In the method section, bold $\\\\mathbf{e}$ denotes vector embedding while normal $e$ denotes a logic formula, which is subtle and confusing. Authors can change the notation of one of them.\\n2. Also, the notation $e_i^j$ is abused, in Eq. 2, it represents a logic formula, however, in Eq. 3, it represents the output of $\\\\phi_p$, which is a scalar.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Surprisingly simple idea that seems to work\", \"review\": [\"The paper attempt to answer conjunctive queries that are in the form of a chain of facts bound together with unobserved variables. The authors suggest that you can use any relational learning method to embed entities and relations in a k-dimensional space and then use the t-norm in order to create a loss function that will be used in order to find the result of the query. The paper investigates continuous optimization through stochastic gradient descent and a greedy method for combinatorial optimization. The results demonstrate that the greedy optimization method performs better. In addition, they claim that their method outperforms other methods with the advantage of using less training data.\", \"Here are some comments\", \"I think the authors cover the relevant work sufficiently\", \"The idea is very simple and builds upon other work that is well studied and well understood by the community\", \"I think that there is an excessive mathematical formalism that is a bit unnecessary. There is no reason for that, the fact that the idea is very simple does not mean that we have to add extra formalism.\", \"In terms of generalization, I think it is very interesting that the users train only on 1-hop queries and evaluate up to 5-hop. In the introduction, the authors claim they use less data than the other methods, but they don\\u2019t make it very clear in the experimental section. I think they need to be more explicit about that. They need to clarify that less data means just the 1-hop queries\", \"I have two reservations about the paper. I suspect that part of the success of their approach is the ComplEx embeddings. I would appreciate an ablation study with at least one more method for relational learning, let\\u2019s say TransE to see how sensitive it is on the embeddings. 
To be fair the authors study in depth the performance of their algorithm in other variations, such as the length of the chain\", \"The other concern is about timing results. It would help to know how the whole algorithm compares in terms of time to answer the query compared to the others. I think it is of particular interest the difference between the two optimization techniques. I suspect that the greedy one might be too slow for longer chains. From the preliminary analysis, it seems to grow as k^d where k is the width of the beam per relation and d is the length of the chain.\", \"In general, I think it is a very practical paper\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"nice improvement of the SOTA\", \"review\": \"The paper proposes Continuous Query Decomposition (CQD), an approach for answering Existential Positive First-Order (EPFO)\\nqueries over incomplete knowledge graphs exploiting a neural link predictor for 1-hop-only queries.\\nEntities are embedded in a low dimensional space and entity vectors are used to compute the score of query atoms that\\nare then combined using a t-norm for conjunction and t-conorm for disjunction.\\nAnswers to queries are found either with continuous optimisation by gradient descent to find embeddings for query variables\\nor combinatorial optimisation where top-k entities for query variables are looked for, yielding a beam search.\\nCQD is compared with Graph Query Embedding (GQE) and Query2Box over three datasets on a large number of queries.\\nThe results show that CQD outperforms the baselines on Hit@3 on average.\\nCQD also offers the possibility of explaining the results of queries by showing the top scoring entities for query variables and the score of atoms.\\n\\nCQD tackles the difficult problem of answering queries that are beyond simple 1-hop completion queries. 
It improves\\nover previous work, which needs to train the model over a large number of queries (Hamilton et al., 2018; Daza & Cochez, 2020;\\nRen et al., 2020) and does not consider disjunctive queries (Hamilton et al., 2018; Daza & Cochez, 2020).\\nThese advantages are obtained by not embedding the query into a low dimensional space but using continuous or combinatorial\\noptimization to answer queries, considering the query as a formula in fuzzy logic and applying t-norms and t-conorms.\\nWhile the use of fuzzy logic in query answering is not new, the way in which it is combined with entity embeddings and\\nneural link predictors is original to the best of my knowledge.\\n\\nThe fact that queries are not embedded (and so learning does not need large numbers of queries) is a strong point of CQD,\\nwith competing methods (Hamilton et al., 2018; Daza & Cochez, 2020; Ren et al., 2020) requiring many queries for tuning the query embeddings.\\nSince queries are not embedded, the results of CQD are also easier to explain. \\n\\nThe experiments are sufficiently extensive to support the claim of the paper that CQD is also outperforming competitors in \\nterms of the quality of solutions. 
However, the authors should justify why they used embedding size 500 for their methods\\nand 400 for the baselines.\\n\\nFrom a technical point of view the article seems sound but the authors say that \\\"Then, after we identified the optimal \\nrepresentation for variables $A, V_1, \\\\ldots V_m$, we replace the query target embedding $e_A$ with the embedding \\nrepresentations $e_c \\\\in R^k$ of all entities $c \\\\in E$, and use the resulting complex query score to compute the \\nlikelihood that such entities answer the query.\\\"\\nIn this way the authors throw away vector $e_A$ that may have information about the problems, isn't there a method to\\nexploit the information in $e_A$?\\n\\nI have a few remarks about the presentation:\\nCitation Raedt, 2008 should be De Raedt, 2008.\\nIn Figure 1 the edges of the graphs have the opposite direction with respect to the caption and main text.\\nPage 6: \\\"we only make use of type 1-chain queries to train the neural link\\npredictor\\\": do the authors mean 1-hop queries? 1-chain appears here for the first time.\\n\\\"In all cases, we use a rank of 500.\\\": for rank do the authors mean the embedding size? This should be clarified.\\nPage 7: \\\"Since\\na query can have multiple answers, we implement a filtered setting, whereby for a given answer, we\\nfilter out other correct answers from the ranking before computing H@3.\\\": this sentence is not clear. Does it mean\\nthat answers that follow from the KG without completion are removed from the ranking?\\n\\n----After reading the other reviews and the authors' comments, I still think the paper is excellent and should be accepted.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
tw60PTRSda2 | Understanding Mental Representations Of Objects Through Verbs Applied To Them | [
"Ka Chun Lam",
"Francisco Pereira",
"Maryam Vaziri-Pashkam",
"Kristin Woodard",
"Emalie McMahon"
] | In order to interact with objects in our environment, we rely on an understanding of the actions that can be performed on them, and the extent to which they rely or have an effect on the properties of the object. This knowledge is called the object "affordance". We propose an approach for creating an embedding of objects in an affordance space, in which each dimension corresponds to an aspect of meaning shared by many actions, using text corpora. This embedding makes it possible to predict which verbs will be applicable to a given object, as captured in human judgments of affordance, better than a variety of alternative approaches. Furthermore, we show that the dimensions learned are interpretable, and that they correspond to typical patterns of interaction with objects. Finally, we show that the dimensions can be used to predict a state-of-the-art mental representation of objects, derived purely from human judgements of object similarity. | [
"Affordance",
"affordance embedding",
"object representation"
] | Reject | https://openreview.net/pdf?id=tw60PTRSda2 | https://openreview.net/forum?id=tw60PTRSda2 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"GQh3dPxJ7eU",
"SAQ_-0N_J-x",
"0_ulaZOg5_M",
"67dZyx6NQtD",
"zzqNZBWwmO",
"GICytvYSAu",
"5o6O8mAF50Y",
"Ij2ed3mfTS",
"AKiFIrCSoXq",
"DvLT4rqi_Xk",
"pRWUZuh_jNT",
"Ai4-u9SxA9B",
"o3brdGr9oln",
"YNLwLe5sqOe",
"eo1XfstqSgH",
"gh-kPC8iQPh"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040388420,
1606258351291,
1606258281430,
1606258164053,
1606258061776,
1606170047735,
1606169168759,
1606169082530,
1606168942364,
1605715507806,
1605646341096,
1605646185854,
1604170192554,
1603967626771,
1603728229074,
1603703443900
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3494/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3494/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3494/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3494/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper is a computational linguistic study of the semantics that can be inferred from text corpora, given that parsers (which are trained on human data) are used to infer the verbs and their objects in text. The reviewers agreed that the work was well executed, and that the experiments comparing the resulting representations to human data were solid. The method employed has little or no technical novelty (in my opinion, not necessarily a flaw), and it's not clear what tasks (beyond capturing human data) the representations could be applied to (again, not a problem if the goal is to develop theories of cognition).\\n\\nThe first draft of the work missed important connections to the computational linguistics literature, where learning about 'affordances for verbs' (referred to as 'selectional preferences') has long been an important goal. The authors did a good job of setting out these connections in the revised manuscript, which the reviewers appreciated. \\n\\nThe work is well executed, and should be commended for relating ideas from different sub-fields in its motivation and framing. But my sincere view is that it does not meet the same standards of machine-learning or technical novelty met by other papers at this conference. It is unclear to me what the framing in terms of 'affordance' adds to a large body of literature studying the semantics of word embeddings, given various syntactically and semantically-informed innovations. 
It feels to me like this work would have been an important contribution to the literature in 2013, but given the current state of the art in representation learning from text and jointly learning from text and other modalities, I would like to have seen some attempt to incorporate these techniques and bridge the gap between the notion of affordance in text/verbs (selectional preference) and Gibson's notion of object affordance (what you can do physically with an object) in experiments and modelling, not just in the discussion. Such a programme of research could yield fascinating insights into the nature of grounding, and the continuum from the concrete, which can be perceived and directly experienced, to the abstract, which must be learned from text. I encourage the authors to continue in this direction. An alternative is to consider submitting the current manuscript to venue where the primary focus is cognitive modelling, and accounting for human, behavioural data, and where there is less emphasis on the development of novel methods or models.\\n\\nFor these reasons, and considering the technical scope of related papers in the programme, I cannot fairly recommend acceptance in this case.\"}",
"{\"title\": \"addressing the concerns on the methodologies, concepts and goal of the project (Part IV)\", \"comment\": \"> Evaluation 1 doesn't seem very meaningful to me. It seems self-evident that representations constructed on the basis of verb--object co-occurrence data will perform well in predicting object--action co-occurrences, and probably better than representations which are not specifically tuned exclusively for that language task.\\n\\nWe carried this evaluation for testing whether our embedding contained the right information, i.e. verb applicability in a broad way. The embedding for an object induces a weighting over verbs that is a combination of per-dimension weights. Given this, we think that testing whether weighting ranks the verbs in a way that is compatible with affordance judgments is a reasonable lower bound. We already discuss shortcomings of this approach at some length in the paper. We do not feel the results were self-evident a priori, which is also why we included methods that use dependency parse information (e.g. DBWE or NNSE).\\n\\n> The aim of this paper is not clear to me. It cannot argue for a superior system of word representation, since it does not evaluate these representations on any broad evaluation tests. It also doesn't make a convincing cognitive argument about the content of mental representations, given the conceptual and methodological issues in evaluation 2, discussed above. (A convincing cognitive argument would also need to draw on data from human behavior beyond the sort gathered on AMT, or possibly neural evidence; see Mitchell et al. (2008) as an example.)\\n\\nSPoSE is derived purely from behavioural data, and tested by having separate subjects predict SPoSE dimensions for novel objects, based on the dimensions for known objects. Additional tests feature predictions of certain judgements (e.g. typicality) or coherence of dimension labelling by subjects. 
We could certainly generate predictions about the same judgments in (Hebart et al. 2020); one could argue that, given how well we can predict the dimensions, it would be trivially easy to do so. \\n \\nFor a more thorough test of our embedding, we believe we would have to carry out dimension labelling experiments with a task such as \\\"given objects that load highly on this dimension, what can you do with them\\\". We could certainly ask subjects to produce these judgements, and confirm that they list the same verbs that score highly for that dimension. It would have been premature to run this experiment without the results shown in this paper, which we think is one argument for publishing them. Given that our embedding is based on language data about which verbs apply to which objects, we would expect these experiments to give verb loadings coherent with ours, to the degree that spoken language agrees with that in the corpus. We have added a mention of this in the discussion, as we believe this is a worthwhile direction to pursue.\\n \\nFinally, it is conceivable to test the embedding with imaging data; this is an area we have substantial experience on. Any such test would involve creating either 1) an encoding model (analogous to (Mitchell et al. 2008)), which would predict the imaging data for novel objects, based on their embedding vector; or 2) a decoding model, which would predict the embedding vector from imaging data of an object. Either approach would allow claims about whether certain types of information are present in particular regions in the brain (see \\\"Interpreting Encoding and Decoding Models\\\" by Kriegeskorte and Douglas 2018). However, the same issues with necessity and sufficiency that you raised earlier permeate the interpretation of the models (see \\\"Causal interpretation rules for encoding and decoding models in neuroimaging\\\" by Weichwald et. al. 2015). 
We would argue this is less interesting than behavioral predictions, given that we already know that category-level information is very decodable (from many studies reusing Mitchell 2008, or more recently shared imaging datasets with hundreds or thousands of objects, e.g. CMU BOLD5000). Non-categorical dimensions would be more interesting, and raise issues of both where the information is represented, and when it comes into play during various tasks (it might require the use of high-resolution fMRI or MEG, see \\\"Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence\\\" by Cichy et al. 2015). Our interest in developing interpretable representations from text corpora, and other sources, is to try to leverage as much information and constraints as possible, prior to generating hypotheses for imaging experiments.\"}",
"{\"title\": \"addressing the concerns on the methodologies, concepts and goal of the project (Part III)\", \"comment\": \"> The claimed \\\"affordance space\\\" is not falsifiably *about* affordances in any deep sense. (...) The lower-dimensional basis may span the space according to \\\"modes of interaction\\\" as claimed, but equally likely may describe coherent categories of contexts/places in which the actions occur, or categories of agents which perform the action, for example.\\n\\nWe appreciate the concern of the reviewer, and the detailed explanation of their reasoning. Our original motivation was to look at affordance construed in a broad sense. Starting from canonical examples about motor actions applied to very specific objects (which are of concern to people in robotics, industrial design, psychology, for instance), we wondered if the obvious regularities there extended to a broader range of objects (e.g. including animals, insects, things that are not obviously manipulated) or verbs (e.g. more abstract, or focused on purpose, intent, etc). If the goal is to study lists of thousands of objects or verbs, obtaining human judgements is prohibitive, as well as fraught with complications (e.g. is a binary yes/no task removing information? how would they calibrate a verb/object compatibility score?). Given this, it makes sense to consider applications of verbs to objects in a corpus, and a factorization to extract regularities, as we discussed above. \\n \\nWe agree that it is subjective to describe the regularities we identified as \\\"modes of interaction\\\"; it is the best description we could conceive of, but not the only one. We appreciate the suggestion of alternative possibilities or confounds, for future work. Some might be testable based on other information in the dependency parses (e.g. subjects of the actions), but others might be confounded (e.g. food preparation with kitchen, hunting with outdoors, tool uses with workshop, etc). 
Note that the latter also appears to be the case for some SPoSE dimensions (e.g. marine with colour blue, plants with colour green). We have added your point to the discussion section, but we think ruling out the other possibilities is beyond the scope of a conference paper.\\n\\n> Evaluation 2 demonstrates a rough sort of sufficiency, but not necessity, of affordance knowledge for object feature knowledge.(...) Demonstrating necessity would require testing alternate representations, I think, which isn't reported.\\n\\nIn Evaluation 2, we show that affordance embedding dimensions work surprisingly well for predicting SPoSE dimensions. This is not trivially true for three reasons. First, we predict in a cross-validation, so overfitting to independent variables would lead to poor results in the held-out data. Second, the affordance dimensions are sparse, so they have to select subsets of objects. Third, as the regression weights are positive, the dimensions have to combine and cannot be traded off against each other, as in a normal regression.\\n \\nAs you correctly point out, this shows that affordance dimensions are sufficient to explain most SPoSE dimensions. Given that SPoSE dimensions are estimated to explain judgements of object similarity, they reflect many possible types of knowledge used in making those judgements. Hence, it is informative to consider *which* SPoSE dimensions can be predicted from affordance, and which cannot. As you correctly hypothesize, one could predict SPoSE from other types of features, e.g. sparse properties as in (Zheng et al. 2019), which introduce an early version of the representation. All that these predictions allow, given the three reasons above, is to show sufficiency for some representations but not others. For example, we could consider deriving something like our affordance embedding from object/adjective relations instead of object/verb ones. 
We would hypothesize this would provide more information about appearance or structural SPoSE features than the affordance embedding does (as can be seen partially in Table 2 and fully in Table 3).\\n \\nAll of this said, it is not possible to demonstrate that there are no other representations that would work as well as ours, beyond the caveat above. However, we can definitely say that we are aware of no other linear representation where dimensions are also interpretable in terms of which verbs would apply. We believe that, with additional behavioral experiments (e.g. asking subjects to produce applicable verbs for objects that score high in each of our dimensions), we could validate our embedding as a mental representation (at least to the same degree as SPoSE), and address this in further detail in an answer below.\"}",
"{\"title\": \"addressing the concerns on the methodologies, concepts and goal of the project (Part II)\", \"comment\": \"> the evaluation based on the raw affordance matrix (called \\\"PPMI\\\" in Table 1) underperforms the full model by a substantial amount, suggesting that the factorization introduces information not captured in the actual affordance data\\n\\nWe would like to answer this point before discussing the other ones, as we believe this is a misunderstanding. The two main reasons to conduct a matrix factorization are to capture patterns where verbs are applied to the same objects, and to de-noise the data while doing so.\\n \\nFactorizations of co-occurrence matrices are used extensively in computational linguistics, and we have tried to explain how our method departs from usual practices in the answer to the previous question. The role the factorization plays is not to introduce new information, but to introduce inductive bias in trying to distinguish what is noise and what is information. Further inductive bias can come from constraints, representing a priori knowledge, hypotheses, or assumptions (e.g. a sparse, positive factorization that works as well as a dense one will be more interpretable, or better for predicting sparse targets). Figure 3 aims to provide an illustration of the effect of using sparsity on interpretability.\\n \\nWith regards to noise, the Stanza dependency parser is very robust, but never 100\\\\% accurate. Even if it were, we would be operating on nouns and verbs with multiple possible meanings; going beyond this would require word sense disambiguation (something we are implementing at present). Finally, there are unusual combinations of nouns and verbs that, being infrequent, will bias the PPMI score to be higher than it should (Bullinaria and Levy, 2012). A factorization will lessen the effect of all of these noise factors, by forcing the model to only have the capacity to represent the most stable co-occurrence patterns. 
This is the case for SVD as well as NMF, and we use the latter, with sparsity, to achieve additional modeling goals as described above.\\n\\n> figure 2b actually shows that some of the dimensions of the affordance space best correlated with SPoSE dimensions are object-taxonomic properties.\\n\\nThis is correct. SPoSE has taxonomic dimensions that act as broad category indicators (e.g. animal, food, wearable, tool, etc), as those appear to drive many judgments of object similarity, and these often have direct counterparts in our embedding dimensions. A justification of this might be that categories have strong selectional preference effects over verbs that can be applied to them. Conversely, it is hard to imagine what commonality between verbs would pertain to objects loading on some of the more visual SPoSE dimensions (e.g. \\\"degree of red\\\" or \\\"colorful pattern\\\"). Note, also, that even those taxonomic dimensions are best predicted by a combination of our dimensions, using positive weights, rather than by individual dimensions.\\n\\n> table 2 confirms that \\\"structural\\\" and \\\"appearance\\\" features are some of the best predicted features from the affordance space.\\n\\nPlease note that Table 2 is only a selection of SPoSE dimensions that we picked for illustration of the range of predictability, together with the top 10 verbs in the predicted ranking. Table 3 in the Appendix lists all dimensions, and the top 50\\\\% are almost all taxonomic/categorical. To see why there would be a few structural or appearance ones, it helps to consider their verbs and the objects loading high on them; these are not shown here but are listed on the (Hebart et al 2020) paper describing SPoSE. 
For structural, \\\"made of metal\\\" (top verbs: fit, invent, manufacture, incorporate, design, position, attach, utilize, carry, install) tend to be mechanisms or parts, \\\"granulated\\\" (top verbs: contain, mix, scatter, add, gather, remove, sprinkle, dry, deposit, shovel) tend to be construction materials or ingredients. For appearance, \\\"textured\\\" (top verbs: remove, place, hang, tear, stain, spread, weave, clean, drape, wrap) and \\\"round\\\" (top verbs: grow, cultivate, pick, add, slice, place, eat, chop, throw, plant) mostly apply to clothing/fabric and fruit/vegetable items. Most appearance features that are poorly predicted do not clearly correspond to items belonging to a category, or combination of categories.\"}",
"{\"title\": \"addressing the concerns on the methodologies, concepts and goal of the project (Part I)\", \"comment\": \"We thank the reviewer for their thoughtful comments. We have taken the liberty of reordering your points, to try to group related ones and provide more concise answers.\\n\\n> Gibson (2014) should be Gibson (1979)\\n\\nThank you for pointing out this mistake, it was inadvertently introduced by the reference manager we used. We have corrected it in the revised manuscript.\\n\\n> Figure 1 is not very useful, either for assessing success of the method or for understanding its shortcomings. For the latter purpose, maybe consider showing the *residuals* of the regression?\\n\\nThe first purpose of Figure 1 is to give readers a sense of the (sparse) distribution of SPoSE dimension loadings across different object categories, for the densest dimensions. The second purpose is to illustrate the degree to which those dimensions can be predicted well, in a held-out fashion (i.e. we plot the predictions for each item when it was in the test set). We agree with the reviewer that showing the residuals of the regression would be useful to see their magnitude in relation to the original dimensions, and have added it in the revised version of the paper. Given that we cannot fit all dimensions in the current figure, we have also added a version of it with all dimensions in the appendix (Figure 7).\\n\\n> Because I haven't closely followed the relevant literature, I can't speak to the originality of the embedding method. That being said, it doesn't seem like a substantial conceptual innovation to me.\\n\\nWhile we agree that factorization of a co-occurrence matrix is standard, we depart from this in a number of ways. 
Given that these ways are what leads to increased performance in the affordance prediction task, as well as interpretability of the embedding, we believe they are relevant and ask you to please consider them in detail.\\n \\nThe first difference from prior work is that we start from a matrix with counts of applications of verbs to nouns, instead of all verb-noun co-occurrence in every sentence of the corpus. This reduces the size of the dataset used in learning an embedding, suggesting that the data are cleaner, the embedding method is more data efficient, or both. \\n \\nThe second difference is the use of a sparse non-negative matrix factorization, as opposed to dense, real-valued factorizations such as SVD. This is necessary for producing embeddings that are interpretable, by virtue of each dimension not being present for most objects, and loading sparsely across verbs. Furthermore, it is not sufficient, given that changes in dimension $k$ and sparsity parameter $\\\\beta$ can substantially alter the results.\\n \\nThis brings us to the third difference from most other papers, which is providing a data-driven procedure to automatically determine the optimal embedding dimension $k$ and sparsity parameter $\\\\beta$ (See Appendix A) for a matrix with these characteristics. We discuss this in more detail in the response to Reviewer 2.\\n \\nGiven all of the differences above, we believe that the technical contribution is rather more than the use of a standard matrix factorization.\"}",
"{\"title\": \"Manuscript Updated\", \"comment\": \"Following the suggestions and comments raised by reviewers, we have updated the submitted version accordingly.\"}",
"{\"title\": \"Responses to the technical novelty and the overall goal of the project (Part 3)\", \"comment\": \"> Typos, unclear figures, other comments\\n\\nThank you very much for pointing out all the typos in the paper, we will correct them in the modified manuscript. We also fixed Figure 1's labelling and moved the full diagram to the Appendix. A trimmed version showing the 10 best predictions replaces it in the main part of the modified manuscript.\\n \\nWith regards to the number of object concepts/verbs being in bold, we simply wanted to make the magnitudes of the lists more visible, as much of the related work uses one or two orders of magnitude fewer items in their evaluation or vocabulary sets. We have removed the highlight.\\n \\nWe used the term \\\"loading\\\" by analogy with factor analysis of a matrix dataset. Here, the object embedding would be akin to the \\\"factors\\\", and the verb embedding akin to the \\\"loadings\\\". We have edited the text to read \\\"and V is the verb loading for each of the d dimensions, i.e. the weighting placed on each verb.\\\"\"}",
"{\"title\": \"Responses to the technical novelty and the overall goal of the project (Part 2)\", \"comment\": \"> When describing the various datasets, eg sec. 3.1, some examples would help.\\n\\nWe have edited the paper to include the following details.\\n \\n\\\"Object categories were normed in Amazon Mechanical Turk. The following 27 categories account for most of the objects: food, animal, clothing, tool, drink, vehicle, fruit, vegetable, body part, toy, container, bird, furniture, sports equipment, musical instrument, dessert, part of car, weapon, plant, insect, kitchen tool, office supply, clothing accessory, kitchen appliance, home decor, medical equipment, and electronic device (Miller, 1995). \\\"\\n\\n\\\"The VerbNet categories selected typically had $10-50$ verbs sharing thematic roles and selectional preferences (e.g. fill-9.8, amalgamate-22.2, manner-speaking-37.3, build-26.1, remove-10.1, cooking-45.3, create-26.4, destroy-44, mix-22.1, vehicle-51.4.1, dress-41.1.1). \\\"\\n\\n> Factorisation of the co-occurrence matrix appears just as standard as the other methods compared against, eg in sec. 4.1. So why are the other techniques any more baselines than yours? The method used appears to be entirely standard, so it's unclear what the technical contribution is.\\n\\nWhile we agree that factorisation of a co-occurrence matrix is standard, we depart from this in a number of ways. Given that these ways are what leads to increased performance in the affordance prediction task, as well as interpretability of the embedding, we believe they are relevant and ask you to please consider them in detail.\\n \\nThe first difference from prior work is that we start from a matrix with counts of applications of verbs to nouns, instead of all verb-noun co-occurrence in every sentence of the corpus. This reduces the size of the dataset used in learning an embedding, suggesting that the data are cleaner, the embedding method is more data efficient, or both. 
\\n \\nThe second difference is the use of a sparse non-negative matrix factorization, as opposed to dense, real-valued factorizations such as SVD. This is necessary for producing embeddings that are interpretable, by virtue of each dimension not being present for most objects, and loading sparsely across verbs. Furthermore, it is not sufficient, given that changes in dimension $k$ and sparsity parameter $\\\\beta$ can substantially alter the results.\\n \\nThis brings us to the third difference from most other papers, which is providing a data-driven procedure to automatically determine the optimal embedding dimension $k$ and sparsity parameter $\\\\beta$ (See Appendix A) for a matrix with these characteristics. This procedure is an adaptation of that in (Kanagal & Sindhwani, 2010), and is particularly well suited for the multiplicative update algorithm used in solving the optimization problem. We are aware of one other effort using a sparse decomposition, albeit with the goal of producing a general purpose embedding (Murphy et al 2012), and without automatic setting of either dimensionality or sparsity; our embedding performs better than theirs for both tasks, and we now include these results as well. In addition, the optimization problem is NP-hard, and the algorithm is only guaranteed to converge to a local minimum. The initial solution used is critical, and we had to investigate different approaches to find one that works well with this method.\\n \\nGiven all of the differences above, we believe that the technical contribution is rather more than the use of a standard matrix factorization. \\n\\n> it's not clear that the representation itself is s-o-t-a; I suspect you mean that you obtained s-o-t-a performance on an existing object-representation dataset.\\n\\nOur apologies for this not being clear. The model for mental representations of objects that we use -- SPoSE -- was recently published in Nature Human Behaviour (12 October 2020). 
It is in that sense that we deemed it state-of-the-art. Although it has been available by request since 2019, it does not appear to have been used as a prediction target in any other paper, as far as we can tell.\\n\\n> Gibson (2014) should be Gibson (1979)\\n\\nThank you for pointing out the mistake, it was introduced by the reference manager that we used. We will correct it in the revised manuscript.\\n\\n> Unclear wording: \\\"We will refer to objects and the nouns naming them interchangeably.\\\"\\n\\nWe have rephrased this to be \\\"As we are not doing sense disambiguation for each noun that names an object, we will use noun or object interchangeably throughout the paper.\\\" As there are only 27 homonyms in 1854 nouns naming objects, this does not visibly affect the embedding. As detailed in a comment to reviewer AnonReviewer4, we are in the process of adding a sense disambiguation step to transform nouns/verbs in the corpus into WordNet synsets, and we plan to study the effect of doing so.\"}",
"{\"title\": \"Responses to the technical novelty and the overall goal of the project (Part 1)\", \"comment\": \"We thank the reviewer for their thoughtful comments. We will answer all questions here, and describe the edits that will be in the revised paper.\\n\\n> It\\u2019s not clear what the overall goal of the work is. (...) The further minor problem is that the sub-field of acquiring selectional preferences in computational linguistics looks to be solving the same problem as what you have here.\\n\\nWe very much appreciate the pointers into this literature, of which we were only tangentially aware via references in (Chao et al 2015). We will answer both questions together, if we may, as the answers are related.\\n \\nOur chief goal is indeed to do cognitive science and, specifically, understand the degree to which mental representations of objects are driven by what can be done to them. The cognitive model in this case is an embedding space for objects, where each dimension scores verbs by the degree that they are similarly applicable to objects (a broadly construed notion of \\\"affordance\\\"). We think this is a reasonable choice in itself, as a sparse factorial model is easy to interpret. However, it is also practical in two different ways. The first practical reason is that the embedding can be produced via a factorization of a matrix derived from a corpus, with additional desirable properties; we will discuss this while answering one of your other questions re how this differs from a standard factorization. The second practical reason is that it can be evaluated quantitatively in two tasks that are relevant for our goal: predicting a general purpose mental model, and predicting affordance judgements. 
Finally, having a dimensional model for objects makes it easier to design stimuli for future experiments involving object interactions and planning, a topic of interest to our collaborators.\\n \\nAs you point out, identifying verbs that tend to be applied to the same nouns is a form of selectional preference. Our work is similar to (Erk 2007), (Pad\\u00f3 et al 2007), (\\u00d3 S\\u00e9aghdha 2010), (Van De Cruys 2014), and (Zhang et al 2020) in basing that identification on co-occurrence statistics in a corpus, rather than a structured resource such as WordNet (as in Resnik 1993) or labelled data (as in Bergsma 2008). The first of those five papers uses similarities of co-occurrence patterns for words to compute selectional preferences for semantic roles in FrameNet. The second uses the same similarity function to predict the plausibility of verb/relation/argument triples. Either of these could easily replace the similarity function with a similarity between embedding vectors derived from a corpus. The third paper is more similar to our approach, in that a regular topic model learns a weighting over a dictionary of elements for each latent variable (topic) in the model; it is more complex in various ways, e.g. it uses separate dictionaries for verbs and nouns, and each observation in the corpus is generated by two latent variables (instead of the single one in a regular topic model). The fourth paper trains a neural network to predict preference scores for combinations of verbs or objects, represented via embedding vectors. The fifth paper learns embeddings for individual words together with modifications for when the word is used in a certain relation. It scores combinations of words by similarity of the modified embedding vectors. The methods in these papers could be used to make the same predictions we are making in the affordance ranking task, where implementations are publicly available or feasible. 
The human labelled datasets we use are larger than those in the original evaluations, so this would be an interesting comparison. This said, making that prediction is not our main goal, as we discussed above, but rather a way of gauging whether our model is capturing the right information. Our proposed embedding space is a latent variable model for verb-noun applications. While this is also the case for the dual topic model in (\\u00d3 S\\u00e9aghdha 2010), the internal representation in (Van De Cruys 2014), or the embeddings of nouns/verbs in (Zhang et al 2020), they would all require extensive modification to add sparsity assumptions -- important for interpretability -- and to produce combined verb rankings for embedding vectors. Doing this comparison is beyond the scope of this paper.\\n \\nWe have edited the related work section to discuss these papers, covering broadly the same points we are making above.\"}",
"{\"title\": \"Addressing the concerns on unclear applicability\", \"comment\": \"We thank the reviewer for their thoughtful comments.\\n\\n> What is the significance of the proposed method, beyond its ability to predict a different set of representations?\\n\\nOur aim is to understand how much of the mental representation of objects is driven by what can be done with/to those objects. Most of the studies looking at affordances of objects consider a much smaller universe of verbs than we do, often because they learn a predictor on labelled object/action pairs. Assembling a dataset like this would not be feasible, given the size of the verb/object lists we consider. We are interested in seeing how many of the verbs in a typical vocabulary are actually used with objects and, further, how they group together based on that usage (rather than existing classification schemes such as those in VerbNet or WordNet). The first contribution is showing that one can do this via an embedding based on dependency relations and additional constraints, where each dimension loads on related verbs much more than on all others. The task of ranking verbs by how well they apply to objects was meant as a test of the quality of the embedding, rather than an end in itself. In doing this, we show that it is not necessary to have a labelled dataset to achieve this goal.\\n\\nThe practical applicability stems from the second contribution, i.e. showing that the SPoSE representation can be predicted from our embedding. If we take SPoSE to be a good proxy for mental representations of objects, each SPoSE dimension that can be predicted well from our embedding can, therefore, be explained in terms of typical actions applied to the objects that have it. This carries over to the predictions of human subject judgments made from SPoSE, e.g. typicality, semantic features. 
A different kind of application, beyond SPoSE predictions, would be to use the embedding to define a stimulus space in experiments about object interaction (e.g. which dimensions are object specific, versus which define continua along which objects lie). This was one of our original motivations, but is not covered in this paper.\\n\\n> I would have liked to see some form of comparison between image- and text-based methods.\\n\\nWe agree with the reviewer that this would be desirable, as several SPoSE dimensions are visual in nature (e.g. \\\"textured\\\", \\\"round\\\", or \\\"colorful pattern\\\"), while possibly also having semantic content. It would be feasible to predict them from the internal representations of deep neural networks (e.g. VGG-S or AlexNet), taking as input images of the objects in our list. We decided not to do it in this paper, given limitations of space and our desire to focus on actions rather than other types of information present in SPoSE. Several of the related papers we cite focus on predicting object affordances from visual features extracted from images. Given that they typically use a limited range of verbs/objects, and this prediction is not our primary goal, we opted to compare our embedding only with those derived from text.\"}",
"{\"title\": \"Addressing the concerns and elaborations on the project goal (Part 2)\", \"comment\": \"> It is interesting that the object/verbs mined from datasets have the number of verbs nearly twice those of the objects...\\n\\nWe thank the reviewer for this suggestion. We could certainly aggregate together verbs which are specializations of an action (e.g. push/shove/nudge) or verb synonyms (given reliable word sense labelling). This is an experiment we will try once we have evaluated our sense labelling pipeline. \\n \\nAs for grouping verbs after the embedding has been learned, we ran a separate experiment looking at the top 50 ranked verbs associated with each well-predicted SPoSE dimension. These verbs fall naturally into $5-8$ VerbNet classes (out of $200+$ possible ones), which group together verbs based on their syntactic form and the arguments informing their semantics. This suggests that those SPoSE dimensions can be explained by very few modes of interaction (if one interprets a VerbNet class as such). However, due to page limitations, we did not report these results in our submitted version.\\n\\n> Can this approach help downstream tasks e.g., not just an object-verb ranking task, but a more general task?\\n\\nOur goal is to do cognitive science research on the mental representation of objects and, specifically, how much of that representation can be accounted for by what can be done with/to those objects. The task of ranking verbs by how well they apply to objects was meant as a test of the quality of the embedding. More specifically, we wanted to see whether using dependency parse information would lead to better verb rankings than using a general-purpose embedding based solely on word co-occurrence. 
This was not a given, as datasets of the former are much sparser than the latter, for the same corpus size.\\n\\nWe take SPoSE to be a good proxy for mental representations of objects, because it can be used to make predictions of human subject judgments about those objects (of typicality, semantic features, etc). Being able to predict SPoSE means that our embedding can be used to make the same predictions; that is definitely a direction we would go in a longer study, but did not have room for here. A different kind of application would be to use the embedding to define a stimulus space in experiments about object interaction (e.g. which dimensions are object specific, versus which define continua along which objects lie).\"}",
"{\"title\": \"Addressing the concerns and elaborations on the project goal\", \"comment\": \"Thank you very much for your valuable suggestions and comments. Below we will address them in detail.\\n> Missing references: https://roboticsconference.org/program/papers/80/\\n\\nThank you for pointing us to the paper. We will add the reference with discussion in the Related Work section, together with other work focused on identifying visual features related to object affordances. Our goal is rather different, in that we treat object affordance prediction mainly as a test of whether our embedding contains relevant information, rather than an end in itself. Furthermore, we consider thousands of verbs, rather than 50.\\n\\n\\n> The approach of attributing/grouping together verbs for specific dimensions seems less intuitive ...\\n\\nOur goal was to make the dimensions interpretable to cognitive scientists, in addition to being informative. Loadings over verbs allow for ranking over the entire verb vocabulary, and our groups of 10 (an arbitrary cutoff) are merely for illustration of what the dimension might correspond to. We agree that novel verbs or objects would not be accounted for, in principle. In practice, almost all the common verbs or objects in our list are present in this corpus, and we plan on expanding to corpora large enough (e.g. Common Crawl) that even very rare ones would be amply represented.\\n \\nWhile we considered using a supervised approach to learning an embedding based on an objective function, we decided against it given the difficulty of obtaining labelled data. As we consider thousands of objects and verbs, extracting judgements for all combinations would be prohibitively expensive, or require additional heuristics for deciding how to sample the space.\\n\\n> How reliable is the processing of the corpora? ...\\n\\nThis was done in an ad-hoc fashion, by sampling about a hundred sentences. 
Our qualitative impression is that we are extracting less information than we could, as the semantic parses are more likely to miss an application of a verb to a noun than to deem it present by mistake. We also rely on the lemmatization in Stanza, given that our verb lists operate on the infinitives. The other possible issue is semantic ambiguity (e.g. \\\"grab the bat\\\") caused by the presence of object homonyms. That said, only 27 objects out of 1854 are such that they have a homonym (e.g. \\\"bat\\\":animal and \\\"bat\\\":object are two of those 27). We are in the process of implementing word-sense disambiguation using the Lesk algorithm, so we can have object and verb lists defined in terms of WordNet synsets.\\n\\n> The correlation results don\\u2019t have statistical significance tests/metrics and that would be helpful to see.\\n\\nWe have added the $p$-values for the correlation values with SPoSE (testing against a null hypothesis of 0 correlation) in the modified version of the manuscript. The $p$-values range from 0.0 to a maximum of 7.61$\\\\mathrm{e}{-7}$ for the SPoSE dimension ``furry'', indicating the statistical significance of correlation with our regression outcomes.\\n\\n> It would be interesting to see/discuss if the bigrams cause any change in performance.\\n\\nWe started this project using unigrams, and that would force us to drop 324 objects in our list. Given that we have an interest in predicting SPoSE dimensions, this was ultimately unacceptable. Subjectively, we think that having the additional objects named with bigrams may have improved the quality of the affordance dimension verb rankings. This could be because there was $>10\\\\%$ more data, or because many spurious co-occurrences were removed (e.g. \\\"ice_cream melts\\\" vs. 
\\\"cream melts\\\").\\n\\n> it would be interesting to see if subword embeddings, such as byte pair encoding, could be incorporated here...\\n\\nOur approach could easily extend to the sub-word scenario, which would be applied on top of the Stanza output as suggested. The co-occurrence matrix would be enlarged, with each row and column corresponding to a sub-word from the object list and verb list, respectively. However, we believe that a sub-word embedding may not be optimal in this specific task, since some rows or columns of the co-occurrence matrix will be very sparse and therefore lead to rare co-occurrence events. This would make the matrix factorization (or viewing it as a denoising procedure) challenging, as discussed in (Turney & Pantel, 2010).\"}",
"{\"title\": \"An interesting problem, but limited novelty (and missed similar, related work) for a focused problem. However, qualitative and quantitative results look good and if this could be proven useful for more than just the verb-object prediction task evaluated in the paper, this could be a good contribution!\", \"review\": \"Summary:\\n\\nThis paper attempts to learn embeddings for objects based on their affordances i.e., verbs that could be applied to them to realise their meaning. Here each dimension corresponds to an affordance or an aspect of meaning shared by actions, thus allowing a correspondence between nouns (objects) and verbs (their affordances) based on co-occurrences in text corpora in which they exist. Empirical results show that these embeddings allow prediction of a \\u201cmental representation\\u201d of objects (i.e., in comparison to human-given annotations of dimension \\u201csemantics\\u201d in embeddings) and a qualitative analysis attempts to show how interpretable the objects are.\", \"reason_for_score\": \"This paper approaches an interesting problem, but is not well-placed in literature and has missed previous work that attempts to do almost the same thing; however from a different angle. I thought the question and problem was interesting enough, but given the missed references and existing embedding-learning work, there is limited novelty in this approach. However, I think the results are interesting enough and the authors did a really nice job of qualitatively analysing the representations (and additionally, it would be good to see further discussion of use-cases of this e.g., instead of just a verb-ranking task), so my score is fairly positive overall. \\n\\nPositive points + questions:\\n\\n1. This paper is well written and the methodology is clearly explained. \\n\\n2. 
The empirical results show that verb rankings obtained by these embeddings outperform all previous embeddings (however those were not learned with this objective in mind, but just tested on the verb-ranking task). \\n\\n3. The results on the SPoSE task (predicting which dimensions correspond to which affordances) show that the correlations between true and predicted dimensions are high.\\n\\n4. The qualitative figures and examples are very insightful and highlight the promise of this object-verb objective to learn embeddings that highlight affordances and useful semantic properties of objects. \\n\\n5. On that note, it would be really interesting to see if this helps downstream tasks e.g., not just an object-verb ranking task, but a more general task (however one that does require reasoning about objects and verbs together in order to correctly solve some decision making/classification problem)\\n\\n\\nNegative points + questions:\\n\\n1. Missing references: https://roboticsconference.org/program/papers/80/\\n\\n2. The approach of attributing/grouping together verbs for specific dimensions seems less intuitive (and more restrained) than just allowing the embedding space to be learned by trying to make verb-object embedding representations more similar based on an objective optimised for similarity/distance between embeddings. It is also further restrained given that novel verbs (unseen during training) may not be accounted for.\\n\\n3. How reliable is the processing of the corpora (e.g., tokenising, bigrams, using Stanza, dependency parsing); were these sanity-checked to assess if they were correct and the amount of noise that exists? Previous work that has attempted to do this from e.g., CommonCrawl/Wikipedia data found a range of errors/inconsistencies because of the domain shift and differences in the style/type of language commonly found on the internet, thus resulting in object-verb pairs that were too noisy for predicting/training purposes. 
It would be helpful to see examples + a manually annotated portion for correctness.\\n\\n4. The correlation results don\\u2019t have statistical significance tests/metrics and that would be helpful to see.\\n\\n5. It would be interesting to see/discuss if the bigrams cause any change in performance (e.g., maybe only allowing unigrams changes the distribution of words and therefore object, verb pairs that might affect results slightly).\\n\\n6. More importantly, given how prevalent subword embeddings are now, it would be interesting to see if that could be incorporated here---for e.g., if a byte pair encoding algorithm was first run over the corpus and nouns/verbs were split according to those, does this still hold? This seems important given that subword embeddings are now used (and perform better) for most of the best performing models, so if a method that allows better object-verb disambiguation could be used to kickstart tasks that require subword embeddings, that would be helpful to see! This seems very doable within this framework with a few minor changes..\", \"additional_minor_comments\": \"1. It is interesting that the object/verbs mined from datasets have the number of verbs nearly twice those of the objects (previous work/datasets seem to have a smaller number of verbs given the overlap of similar verbs for the same object). This begs the question of whether or not similar verbs can be collapsed into one another/and also a qualitative analysis of whether these are grouped together and can be predicted alongside.\\n\\n2. It would be good to see statistical significance tests for metrics.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Clear exposition of a method with unclear applicability\", \"review\": \"Summary: The authors develop object representations based on the concept of affordances, making use of a dependency parsed text corpus and a factorization of the PPMI matrix of noun-verb pairs. They show the proposed approach is able to predict human judgements of object affordances better than distributional methods and LSA. They further show that the novel representations correlate well to a set of interpretable representations that were obtained via human judgements of object similarity.\", \"main_contributions\": \"1. A method of learning an embedding of objects from an unannotated text corpus which is infused with a degree of knowledge of object affordance.\\n2. An analysis of the relationship between the proposed embeddings and SPoSE embeddings.\", \"strengths\": \"1. The proposed approach is simple and does not require any form of complex annotation. This enables the consideration of a large set of verbs compared to approaches which are based on manually created datasets.\\n2. The analysis of interpretability is well thought out. The proposed embeddings display high predictive abilities for a majority of SPoSE dimensions, suggesting that this method might offer a good model of the mental representation of objects.\", \"weaknesses\": \"1. One crucial aspect which I feel the authors neglected to adequately address is the aim of the work, and its practical applicability. What is the significance of the proposed method, beyond its ability to predict a different set of representations? If these representations are meant to achieve a new state of the art, the evaluation is too limited and fails to include common methods in the literature, such as contextual embeddings. [This has since been addressed]\\n2. 
Given the existence of methods that make use of visual features to predict affordances, I would have liked to see some form of comparison between image- and text-based methods.\\n\\n## Response to comments\\n\\nI thank the authors for their comments and their revision, which have clarified the aim of this work. Having read the authors' responses to this and other reviews, I realize that in my initial assessment I had misjudged the nature of the manuscript. After careful consideration, I have therefore increased my rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting motivation, but conceptually unclear and methodologically flawed\", \"review\": \"The authors design a distributional word embedding method inspired by Gibsonian theories of perception. They use matrix factorization techniques to derive low-rank object representations in what they call an \\\"affordance space,\\\" linking each object to aspects of meaning shared among different types of physical actions. They argue that the learned representations are interpretable, and that this affordance space \\\"underlies the mental representation of objects.\\\"\\n\\nI unfortunately found the paper both conceptually and methodologically flawed. These criticisms fall mainly under the \\\"Quality\\\" and \\\"Significance\\\" categories, expanded below. First, a summary in pros/cons:\", \"pros\": \"Designs cognitively-motivated knowledge representations; leverages a diverse set of experiments to better understand and defend these representations.\", \"cons\": \"Conceptual flaws about the content of the derived representations; evaluations are insufficient to support the claims of the paper.\\n\\nQuality\\n\\nThis paper suffers from both conceptual and methodological issues.\\n\\n1. The claimed \\\"affordance space\\\" is not falsifiably \\\\*about\\\\* affordances in any deep sense. While the original data matrix linking words and their associated attested verb combinations clearly gets at possible event--object interactions, the factorized affordance space doesn't necessarily have this property. The lower-dimensional basis may span the space according to \\\"modes of interaction\\\" as claimed, but equally likely may describe coherent categories of contexts/places in which the actions occur, or categories of agents which perform the action, for example. I actually see three facts reported in the paper that make me think the derived data isn't about affordances per se. 
First, figure 2b actually shows that some of the dimensions of the affordance space best correlated with SPoSE dimensions are object-taxonomic properties. Second, the evaluation based on the raw affordance matrix (called \\\"PPMI\\\" in Table 1) underperforms the full model by a substantial amount, suggesting that the factorization introduces information not captured in the actual affordance data. Third and possibly most importantly, table 2 confirms that \\\"structural\\\" and \\\"appearance\\\" features are some of the best predicted features from the affordance space. The authors may argue that the set of English verbs used in the raw matrix are not the right basis for affordance knowledge, and that the factorization leads to a better abstract/conceptual affordance knowledge representation less tied to linguistic productions. But this claim about the content of the factorized representation needs to be articulated and substantiated with tests of alternative hypotheses. As a quick analogy in case my point isn't clear: you might learn word embeddings on a Wikipedia dump by factorizing a matrix of word--Wikipedia topic co-occurrence counts. The resulting low-dimensional representations aren't \\\\*about Wikipedia topics\\\\* in any deep sense, no matter the factorization method --- we simply talk about them as distributional meaning representations.\\n2. Regarding the methodology of evaluation 2: is your aim to demonstrate the necessity and sufficiency of affordance knowledge for object feature knowledge? The evaluation demonstrates a rough sort of sufficiency, but not necessity. Demonstrating necessity would require testing alternate representations, I think, which isn't reported. What do you think about this premise/issue?\\n3. Evaluation 1 doesn't seem very meaningful to me. 
It seems self-evident that representations constructed on the basis of verb--object co-occurrence data will perform well in predicting object--action co-occurrences, and probably better than representations which are not specifically tuned exclusively for that language task. (I agree that it's nontrivial that a corpus-derived representation would suffice here, but I don't find it interesting that it outperforms other more general / less task-specific corpus-derived representations.) \\n\\nSignificance\\n\\nThe aim of this paper is not clear to me. It cannot argue for a superior system of word representation, since it does not evaluate these representations on any broad evaluation tests. It also doesn't make a convincing cognitive argument about the content of mental representations, given the conceptual and methodological issues in evaluation 2, discussed above. (A convincing cognitive argument would also need to draw on data from human behavior beyond the sort gathered on AMT, or possibly neural evidence; see Mitchell et al. (2008) as an example.) \\n\\nOriginality\\n\\nBecause I haven't closely followed the relevant literature, I can't speak to the originality of the embedding method. That being said, it doesn't seem like a substantial conceptual innovation to me. I would be more motivated to let this slide if the paper were stronger on the experimental / analytic side. \\n\\nClarity\\n\\nThe paper is clearly written and the authors provide plenty of supporting supplemental information. Some minor comments on this front:\\n\\n* Figure 1 is not very useful, either for assessing success of the method or for understanding its shortcomings. For the latter purpose, maybe consider showing the \\\\*residuals\\\\* of the regression, so we can understand where affordance information performs relatively better/worse across SPoSE dimensions?\\n* Gibson (2014) citation should be Gibson (1979). \\n\\nMitchell, T. M., Shinkareva, S. V., Carlson, A., Chang, K.-M., Malave, V. 
L., Mason, R. A., & Just, M. A. (2008). Predicting human brain activity associated with the meanings of nouns. science, 320(5880), 1191--1195\\\\.\\n\\n## Post-rebuttal response\\n\\nI have read the other reviews and the authors' extremely thorough responses \\u2014 much appreciated! See the thread below for some brief responses to the rebuttal sections in turn.\\nI regret posing far too high a standard in my original review. The authors' rebuttals have helped to quiet my doubts a bit, and better understand the utility of this paper as a product for cognitive science. I have accordingly revised my judgment quite a bit upward.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review for UNDERSTANDING MENTAL REPRESENTATIONS OF OBJECTS THROUGH VERBS APPLIED TO THEM\", \"review\": \"General comments\\n---\\n\\nThis paper uses a factorisation of a verb-object co-occurrence count matrix to predict which verbs are applicable to which objects. This idea is related to the classical notion of an \\\"affordance\\\" from Gibson. The method is evaluated on a number of recent affordance datasets, obtaining better performance than a number of baseline systems.\\n\\nThe main problem I have with the paper as it stands is that it's not clear what the overall goal of the work is. A minor problem is that the method used appears to be entirely standard, so it's unclear what the technical contribution is. A further minor problem is that there is a whole related sub-field of computational linguistics which has been investigating a similar problem for decades which is ignored in the discussion.\", \"the_main_problem\": \"is the goal to develop a psychologically plausible cognitive model? Are we doing cogsci here? Or is it to build a knowledge base that can be used by an AI system (so more on the engineering side)? But if the latter, how would the knowledge be used, and by what sort of AI system? Is the knowledge to be used by a text-based system (if so, how) or by a situated agent interacting with an environment (in which case it needs explaining how the knowledge could be grounded in the agent's environment)?\\n\\nThe minor problem is that factorisation of the co-occurrence matrix appears just as standard as the other methods compared against, eg in sec. 4.1. So why are the other techniques any more baselines than yours?\\n\\nThe further minor problem is that the sub-field of acquiring selectional preferences in computational linguistics looks to be solving the same problem as what you have here. Classic references are Wilks from the 1970s and Resnik from the 1990s. 
\\n\\nMore specifically, there's a lot of existing work on taking a set of verb-object pairs and clustering the data in some way. This paper from 2010 is a good one to look at, and has lots of relevant references:\\n\\nLatent variable models of selectional preference\\nDiarmuid O Seaghdha\\nACL 2010\\n\\nMore specific comments\\n--\\n\\nwe show that the dimensions can be used to predict a state-of-the-art\\nmental representation of objects - it's not clear that the\\nrepresentation itself is s-o-t-a; I suspect you mean that you obtained\\ns-o-t-a performance on an existing object-representation dataset.\\n\\n\\\"Gibson (2014) coined the term \\u201caffordance\\u201d to describe what the\\nenvironment\\\" - the term was coined by Gibson much earlier, 1979?\\n\\n\\\"We will refer to objects and the nouns naming them interchangeably.\\\" -\", \"not_sure_what_you_mean_here\": \"is it that you either say \\\"someone can\\nstroke a cat\\\" or the verb \\\"stroke\\\" can apply to the noun \\\"cat\\\"? what's\\nthe significance of the difference?\\n\\nwhen describing the various datasets, eg sec. 3.1, some examples would\\nhelp.\\n\\nTypos etc.\\n--\\n\\nThe labels in Figure 1 are too small to read.\\n\\nto a particular \\u201dmode of interaction\\u201d - left quotes\\n\\n defined as \\u201daffordance mining\\u201d - left quotes\\n\\nIn this paper, we use the list of 1854 object concepts - not sure why\\nthe number is in bold.\\n\\n The resulting list has 2541 verbs - not sure why\\nthe number is in bold.\\n\\nV is the verb loading for each of the d dimensions - \\\"loading\\\" is an\\nodd term to use here, maybe \\\"weighting\\\"?\\n\\n 2)purpose\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
chPj_I5KMHG | Grounding Language to Autonomously-Acquired Skills via Goal Generation | [
"Ahmed Akakzia",
"Cédric Colas",
"Pierre-Yves Oudeyer",
"Mohamed CHETOUANI",
"Olivier Sigaud"
] | We are interested in the autonomous acquisition of repertoires of skills. Language-conditioned reinforcement learning (LC-RL) approaches are great tools in this quest, as they allow to express abstract goals as sets of constraints on the states. However, most LC-RL agents are not autonomous and cannot learn without external instructions and feedback. Besides, their direct language condition cannot account for the goal-directed behavior of pre-verbal infants and strongly limits the expression of behavioral diversity for a given language input. To resolve these issues, we propose a new conceptual approach to language-conditioned RL: the Language-Goal-Behavior architecture (LGB). LGB decouples skill learning and language grounding via an intermediate semantic representation of the world. To showcase the properties of LGB, we present a specific implementation called DECSTR. DECSTR is an intrinsically motivated learning agent endowed with an innate semantic representation describing spatial relations between physical objects. In a first stage G -> B, it freely explores its environment and targets self-generated semantic configurations. In a second stage (L -> G), it trains a language-conditioned goal generator to generate semantic goals that match the constraints expressed in language-based inputs. We showcase the additional properties of LGB w.r.t. both an end-to-end LC-RL approach and a similar approach leveraging non-semantic, continuous intermediate representations. Intermediate semantic representations help satisfy language commands in a diversity of ways, enable strategy switching after a failure and facilitate language grounding. | [
"Deep reinforcement learning",
"intrinsic motivations",
"symbolic representations",
"autonomous learning"
] | Accept (Poster) | https://openreview.net/pdf?id=chPj_I5KMHG | https://openreview.net/forum?id=chPj_I5KMHG | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"fe5ZowCsSfv",
"tnm_XVDwBco",
"zCEvz5G4MJ",
"1_Kmsq697EX",
"ZxHB-_MK6fG",
"5-oJ_oQzm6",
"Jrh55dtn9yx",
"rFmSe-JpDnn",
"W_w7oWGWRUF",
"5Dp9pvfSZc3",
"ntrpXGPIzUp",
"v-rhY06auRT",
"fiI-UyyaJM",
"P0_vwQ5axP2",
"v9rKISMsML",
"4rmRMNBJjDf",
"y5zTu5ve6X",
"Y4HrV3QfDJY"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040394777,
1606300371798,
1606269683709,
1606246161900,
1606244978550,
1606127836991,
1606085348065,
1605802253514,
1605802176907,
1605802105051,
1605802002054,
1605801933302,
1605801869601,
1605801738674,
1603983667871,
1603961837580,
1603752003284,
1603149682781
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3493/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3493/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3493/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3493/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3493/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a new approach to grounding language-based RL tasks via an intermediate semantic representation, in an architecture called language-goal-behavior (LGB). The architecture permits learning a mapping from internal goals to behavior (GB) separately from learning a mapping from language to internal goals (LG), and prior to flexibly combining all three (LGB). The architecture is studied in a specific implementation called DECSTR. The architecture has multiple desired attributes including support for intrinsic motivation, decoupling skill acquisition from language grounding, and strategy switching. The experiments demonstrate the utility of different components in the architecture with a variety of ablation results.\\n\\nThe reviews initially found the paper to be poorly organized with required content described only in the appendix (R1, R2, R4), with unclear main contributions (R1, R2, R4), and with results restricted to demonstrations (R3). Despite these reservations, the reviewers found the content to be potentially relevant though narrow in scope.\\n\\nThe authors substantially revised the paper. They improved its organization, clarified contributions, separated the architecture from the specific examples, and improved the experimental baselines. After reading the revised paper, the reviewers agreed that the paper's organization and insights were improved, making the new paper's contribution and insight clear. The experimental baselines were also improved, providing more support for the potential utility of the proposed method.\\n\\nThree reviewers indicate to accept this paper for its contribution of a novel approach to grounding language and behavior with an intermediate semantic representation. No substantial concerns were raised on the content of the revised paper. The paper is therefore accepted.\"}",
"{\"title\": \"Answer to Reviewer 4\", \"comment\": \"Thank you for this quick answer.\\n\\n**About LGB-C**\\n\\nLGB-C indeed samples random targets, which forces a minimal distance for blocks to travel. This is the way it is implemented in Li et al., 2019 and Lanier et al., 2019, probably because it increases diversity in training trajectories. We agree that we could in principle add another module to sample closer targets at test time. DECSTR however, does it spontaneously as it learns to reach semantic goals and does not require an extra module. We are rerunning the analysis with the extra module and will report the results in the camera-ready version (and here if time allows).\\n\\n\\n\\n**About LGG module for continuous targets**\\n\\nIt is true that the dataset contains a good diversity of continuous targets. We use the same dataset to generate semantic configurations or continuous targets, but it is much more diverse for the continuous targets. Indeed, considering semantic configurations leads to many repeats in the dataset, while continuous targets are always new.\\n\\nIt seems the C-VAE has trouble integrating the language condition to differentiate between different target distributions and mostly predicts a low-diversity of average targets resulting in a semantic configuration where all blocks are close together. We did try to add more capacity to the network (up to three layers of 256), and investigated other loss functions (soft BCE or the continuous Bernoulli from https://arxiv.org/abs/1907.06845 , both with normalized targets in [0, 1]), but this did not help. Further investigations of this issue might help improve the language-conditioned goal generation. However, leveraging internal semantic representations will always be easier, as there is a more direct mapping between language and this semantic representation. It might seem ad-hoc at first, but language evolved as a way to express these internal semantic representations to other humans. 
Furthermore, the discrete aspect of semantic representation helps generalize to logical combination, which cannot be done directly with continuous goal generation.\"}",
"{\"title\": \"Thank you for the clarifications and update. Some additional comments and questions.\", \"comment\": \"Thank you for your response and the updated draft. The use of HER in the baselines is much clearer now, and I appreciate the new language conditioned RL (LC-RL for L->B) baseline in Section 4.2 which answers the question about the need for intermediate baselines, and the LGB-C baseline which answers the question about the need for semantic intermediate representation compared to continuous representation.\\n\\nNote here that the continuous goal representation in LGB-C is the (randomly sampled) specific 3D coordinates of the blocks that satisfy the semantic configuration (i.e. it is one of many possible specific configurations that satisfies the semantic configuration). For the G->B phase, my intuition here is that since the specific configuration is sampled randomly for the semantic configuration, it makes sense that the LGB-C version ends up moving the blocks more / takes longer since we are moving it to any one of the valid specific configurations, rather than inferring the \\u2018closest\\u2019 valid configuration. Am I understanding this correctly?\\n\\nOn the L->G part, do you have some more intuition on why there is low diversity/recall? I wonder if it has to do with the capacity of C-VAE used in LGG? I am presuming that the interaction data used to train LGG C-VAE has enough diversity (many configurations for the semantic), but sampling from C-VAE has low diversity in the continuous goal configuration here. I am curious to see what the authors think. \\n\\nHopefully, there is enough time for the authors to respond. Given the major changes which improved the paper clarity and methodology, I am willing to consider increasing my score to a weak acceptance (still contemplating).\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for these clarifications and those in the general response. I like the new framing of the paper and I think the contribution is much more clearly scoped.\"}",
"{\"title\": \"Final rebuttal revision\", \"comment\": \"One more time, we thank the reviewers for their efforts and constructive feedback.\\n\\nWe integrated the last results of the language-conditioned baseline to the paper.\\n\\nThe reviews helped us improve the positioning of the paper. The new version now clearly presents the targeted problem and states our contributions towards its resolution. We undertook a major rewriting of the paper to better convey these ideas, ran complementary experiments (the language-conditioned baseline) and improved our position baseline.\\n\\nWe hope the reviewers will find time to go through the revised version.\"}",
"{\"title\": \"Revised version nearly complete\", \"comment\": \"The revised paper is nearly complete. After taking into consideration the reviewers' comments, we updated ~80% of the previous version.\\n\\nWe will have the last results of the Language baseline by the end of tomorrow and will update the two corresponding paragraphs (in Section 4.2) accordingly. Given the current progress of the runs, we are expecting it to learn all the close and far goals but not to succeed in the stacking goals. In the mastered goals, we are also expecting a reduced diversity compared to DECSTR.\\n\\nThe rest of the paper is definitive and will not be updated before tomorrow's deadline.\"}",
"{\"title\": \"Revised paper incomplete\", \"comment\": \"Will the complete version of the revised paper be ready before the deadline for discussion? The new framing seems promising, but the paper still has \\\"coming soon\\\" notes and a lot of stray-looking paragraph headings breaking up the text.\"}",
"{\"title\": \"Answer to Reviewer 3\", \"comment\": \"We thank Reviewer 3 for their helpful feedback. Here, we answer comments that were specific to Reviewer 3. Concerns shared with other reviewers are addressed in the general answer.\\n\\n**Application to other domains**\\n\\nWe partly answer this point in the main answer. The definition of semantic representations is domain-specific just like the definition of goal spaces and reward functions in traditional goal-conditioned RL. Instead of defining the set of tasks, we define the set of possible behaviors by defining the dimensions of that space. In the main answer, we argue that it is an easier task that involves less prior knowledge.\\n\\nFrom a developmental point of view, such sensors could be innate or acquired very early by infants, as it is the case for spatial predicates (Mandler, 2012). In future work, we would like to investigate how to learn these predicates from social interactions. The question of learning semantic predicates that can be used across a large variety of domains is also very interesting.\\n\\nHowever, the main contribution of this paper is to define and demonstrate the benefits of the decoupled LGB architecture over standard language-conditioned RL approaches. In this demonstration, DECSTR provides an illustration of the three properties emerging from LGB architectures (see main answer). Thus, the design/learning of semantic representation that generalize across domains and allow to represent a diversity of interesting behaviors is orthogonal to the main contribution of the paper.\\n\\n**About the use of more complicated predicates** \\n\\n\\\"in principle ... could use any other combination of binary predicates and could be extended to use n-ary predicates\\\": by this sentence, we mean that semantic representations can be composed of any combinations of n-ary predicates that we can think of. 
Of course, more complicated semantic representations also involve more complicated learning architectures to handle them. We clarified this sentence in the new version.\\n\\nAbout the \\u201cabove\\u201d inductive bias: we can argue that infants who have an innate sensor for the \\u201cabove\\u201d relation might also have the innate implicit knowledge of the symmetry of that relation.\", \"about_other_predicates\": \"the sentence \\u201cuse the green block as the base\\u201d could be seen as describing many binary \\u201cabove\\u201d relations where the green block is always the block below. \\u201cTop-most\\u201d could also use a binary predicate \\u201cgenerally above\\u201d that does not require horizontal alignment, it would then refer to the block that is never the block below in all the existing \\u201cgenerally above\\u201d relations. Overall, inductive biases are not theoretically required to handle n-ary predicates, but they are practically useful for our current algorithms to work with reasonable sample sizes. Future implementations of the LGB architecture might require the use of Graph Neural Networks to handle relations between several nodes-objects.\\n\\n**About logical combinations of instructions**\\n\\nReviewer 3 is right, our logical combinations are symbolic and are not expressed directly by text. It can be seen as assuming that the agent has an innate knowledge of the OR, AND and NOT logical functions, and can use them to combine any atomic instructions it discovered during its interaction with the tutor. 
How to translate complicated sentences that express logical combinations into logical trees of basic instructions that the agent could handle is very interesting but out of the scope of this present work.\\n\\n**\\\"a learning architecture that discovers and masters all reachable configurations from a set of relational primitives\\\"** \\n\\nWe acknowledge that this formulation could be misinterpreted and corrected it.\\n\\n**Typos**\\n\\nWe thank Reviewer 3 for pointing minor errors in the text. We corrected them in the new version.\\n\\n**Conclusion**\\n\\nReviewer 3 seems mostly concerned about the generality of our approach and its use in other domains. With our new positioning detailed in the main answer, we argue that this paper presents a general architecture (LGB) and that DECSTR is only a particular implementation of it, to demonstrate its benefits. The design of semantic representations is orthogonal to the approach of this paper but is an interesting topic for future research. Designing semantic representation is defining the space of behaviors that the agent can explore. Although it is simpler than designing space of achievable goals and their associated rewards, it remains domain dependent. Designing or learning general predicates, and designing learning architecture able to handle complicated n-ary predicates is out of the scope of this paper.\"}",
"{\"title\": \"Answer to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for their helpful feedback. Here, we answer comments that were specific to Reviewer 2. Concerns shared with other reviewers are addressed in the general answer.\\n\\n**Comments on contributions and experimental section**\\n\\nReviewer 2 seems to be mostly concerned about the lack of focus of our paper. We thank R2 for their very constructive suggestions on that aspect. These helped us refocus our paper as detailed in the main answer. We believe the organization of the experimental section is drastically improved by the new positioning of the paper. We also moved important information and visualisations back from the appendix to the main document. \\n\\n**Generality of the semantic representation**\\n\\nWe answer this comment in the main answer.\\n\\n**Smaller comments**\\n\\n* DECSTR is now the name of the particular instance of the general LGB architecture we propose. This particular implementation relies on Deep Sets, which can justify the use of the term in the name. However, we removed \\u201cDECSTR\\u201d from the title of the paper, as it is only a secondary contribution, the LGB architecture being the central contribution.\\n* We now explicitly list our contributions in the introduction\\n* The description of inductive biases is indeed secondary given the new focus of the paper on the general LGB architecture. The methods section is updated accordingly.\\n* We thank Reviewer 2 for a comment that pushed to allocate extra resources to the position baseline. We reached higher performance by combining non-binary rewards (as advised by Reviewer 2) and the multi-criteria HER method from Lanier et al., 2019. This helps get the Position baseline closer to state-of-the-art RL approaches for manipulation tasks. \\n\\n**Conclusion**\\n\\nIt seems the main concerns of Reviewer 2 are about the contribution and challenge statements as well as the organization of the experimental section. 
We hope that our answers and the new version of the paper help resolve them.\"}",
"{\"title\": \"Answer to Reviewer 4\", \"comment\": \"We thank Reviewer 4 for their helpful feedback. Here we answer comments that were specific to Reviewer 4. Concerns shared with other reviewers are addressed in the general answer.\\n\\n**Concerns about the baselines**\\n\\nWe are indeed using HER in the design of our baselines. The baselines are designed so as to reduce the number of potential confounding factors to a minimum. For this reason, we keep most modules strictly equivalent (HER included). The main answer provides further details about the baselines, including the new Language baseline that will replace the older one.\\n\\nThe reviewer also asks whether we could train the DECSTR agent with Phase 1 and Phase 2 coupled together or in an end-to-end fashion. As long as there is a fixed intermediate representation, the gradients cannot flow backward, which prevents any end-to-end learning. The new end-to-end language-conditioned baseline implements a state-of-the-art language-conditioned RL architecture and performs this coupled learning. Furthermore, the introduction of the language-conditioned goal generator makes the emergence of behavioral diversity possible, as many valid configurations can be generated for a given language input. Other implementations of our architecture could mix the two phases: either run them asynchronously, or make repeated cycles between them, etc. This could allow the tutor to guide its selection of instructions so as to orient the sensorimotor learning of the agent.\\n\\n**Organization of the method section**\\n\\nAs explained in the main answer, we deeply reorganized the method and experimental sections. This was made easier by the reflection on the focus of the paper, as outlined in the main answer. 
Important details are also moved back from the appendix to the main document to facilitate comprehension.\\nWe fixed backticks, thank you.\\n\\n**Conclusion**\\n\\nIt seems Reviewer 4 is mostly concerned about the design of the baselines. Answering their concern, we confirmed that we do use HER in our baselines. The main answer details improvements made on both baselines: improving the position baseline and redefining the language baseline to better support our main contribution. We hope the new organization of the method and experiment sections aids understanding.\"}",
"{\"title\": \"Answer to Reviewer 1\", \"comment\": \"We thank Reviewer 1 for their helpful feedback. Here, we answer comments that were specific to Reviewer 1. Concerns shared with other reviewers are addressed in the general answer.\\n\\n**The method section is hard to follow**\\n\\nAs detailed in the main answer, the new organization of the method section presents the general LGB architecture, the environment, then follows the three modules of the proposed instantiation of the LGB architecture with DECSTR: \\n* the semantic representation;\\n* the intrinsically-motivated goal-conditioned RL algorithm;\\n* the language-conditioned goal generator.\\n\\nThis new organization should be easier to follow. In addition, we moved part of the information from the appendix back to the main paper to facilitate comprehension.\\n\\n**Ablations and baselines**\\n\\nWe call \\u201cbaselines\\u201d variants of our algorithm that implement defining features from related state-of-the-art approaches (language-conditioned RL and continuous goal-conditioned RL for manipulation tasks respectively). We call \\u201cablations\\u201d variants of DECSTR that aim at showing the importance of its components (inductive biases, curriculum, etc.). Our main answer and the new version of the paper clarify the purpose and definitions of our two baselines.\\n\\n**The paper seems to be a demonstration of DECSTR**\\n\\nWe agree that the focus of the paper was not clear in the previous version. We hope the new positioning outlined in the main answer helps resolve these concerns. The paper will be about the novel RL architecture LGB. Most of its properties emerge from its design (decoupling, goal generation leading to behavioral diversity and enabling strategy switching). DECSTR is thus a concrete illustration of these properties in a specific setup. 
We argue that other implementations following the same overall architecture will demonstrate similar properties and benefits over existing approaches.\\n\\n**Conclusion**\\n\\nIt seems the main concerns of Reviewer 1 are about the contribution and challenge statements as well as the organization of the methods and experimental section. We hope that our answers and the new version of the paper help resolve them.\"}",
"{\"title\": \"General answer to all reviewers (3/3)\", \"comment\": \"**On the definition of semantic representations**\\n\\nIn this new positioning, the set of semantic predicates is not a contribution. We do not argue that our proposed representation is general enough to solve all tasks. \\n\\nExtending the set of spatial predicates handled by an LGB architecture is an interesting question for future research. Indeed, infants spend most of their time in what can be considered as manipulation scenarios. We know from Mandler 2012 that infants use spatial predicates really early in life, and she even argues that a small set of them (around 20) enables infants to bootstrap an important set of sensorimotor and cognitive skills. \\n\\nOn the application of LGB architectures to other domains, we argue that the definition of sets of semantic predicates (i.e. binary sensors) is easier and involves less prior knowledge than the definition of goal spaces and associated reward functions it replaces. Indeed, defining the semantic predicates is defining the dimensions of a behavioral space. It does not require the engineer to fully grasp all behaviors in that space, to know which behaviors can be achieved and which ones cannot, nor to define reward functions for each of them. The space of potential behaviors becomes combinatorially larger as new semantic predicates are added; the reward function only asserts equality between the current and goal configurations. Agents could also easily grow the space of potential behaviors by adding new semantic predicates across learning.\\n\\n**Conclusion**\\n\\nIt seems the main concerns of the reviewers were about the lack of clear problem and contribution statements and a poor organization of the methods and experiments sections. Thanks to their comments, we believe the new positioning of the paper and its new organization that results from it answer these concerns. 
We hope reviewers will find the time to read the updated version of the paper, that includes all the points discussed above and presents a clearer organization.\"}",
"{\"title\": \"General answer to all reviewers (2/3)\", \"comment\": [\"**Paper organization (all)**\", \"The method and experimental sections are now reorganized to reflect the new positioning of the paper and information facilitating comprehension is moved back from the appendix to the main document. The new organization is as follows:\", \"The method section presents the general LGB architecture, the environment and the implementation of the three modules composing the LGB architecture in the DECSTR algorithm: 1) the semantic representation; 2) the intrinsically motivated goal-conditioned RL algorithm and 3) the language-conditioned goal generator.\", \"The experimental section is reorganized into three sections (R1,R2,R3,R4):\", \"S1 shows that DECSTR solves the task\", \"S2 compares DECSTR to a language-conditioned RL approach and shows that the three properties emerge only in DECSTR\", \"S3 shows that LGB benefits from semantic representation when compared to a variant using continuous representations.\", \"**Baselines (R1, R2, R4)**\", \"Our problem requires agents to learn from two separate sources (language instructions and self-generated goals). This limits the use of existing algorithms as-is. In the design of our two baselines, we decided to integrate the defining features of state-of-the-art algorithms in variants of ours (see details below). This strategy has two benefits: 1) it helps control for confounding factors that can emerge from the use of a completely different architecture and code base; 2) it mitigates the problem of under-tuned baselines, as most components are shared across algorithms. Here we clarify the purpose of our baselines:\", \"The language-conditioned baseline (LB) is used to compare the LGB architecture (implemented as DECSTR) to standard language-conditioned algorithms, focusing on their performance on the three properties listed above. 
By comparing the two approaches on the same instruction-following task (turning 102 instructions into corresponding behaviors), we study the properties emerging from the decoupled LGB architecture. We are currently implementing this new version of the baseline. We will report the results and update the paper as soon as possible. We hope to show that, even if it succeeds in the instruction-following task, it will show neither behavioral diversity nor strategy-switching behaviors, compared to DECSTR.\", \"Note that this baseline replaces the previous Language baseline, whose purpose was unclear. The new baseline implements defining features of IMAGINE (Colas et al., 2020), without goal imagination, but with an oracle reward function and can thus be seen as a state-of-the-art intrinsically motivated language-conditioned algorithm.\", \"The Position baseline. The purpose of this baseline is to demonstrate the benefits of decoupling via a semantic representation instead of a continuous one. In addition to the non-binary reward suggested by R2 (which did not help alone), we added another defining feature from Lanier et al., 2019 (multi-criteria HER) and an additional object-centered inductive bias, making this baseline closer to state-of-the-art goal-conditioned RL algorithms for block manipulation. We thank R2 for pushing us to spend more time on this baseline. The sensorimotor learning phase now demonstrates good results. The LGB architecture still seems to benefit from semantic representation: 1) for interpretability (it is more natural to ask \\u201cput the red block on the blue block\\u201d than asking \\u201cput block 1 at (1.2, 0.9, 2.3)\\u201d); 2) for language acquisition; 3) to facilitate opportunistic goal completion and 4) to acquire skill repertoires. Indeed the behavior of this baseline can be seen as a unique skill: placing blocks at their targets. This does not discriminate between different semantic skills.\"]}",
"{\"title\": \"General answer to all reviewers (1/3)\", \"comment\": \"*We answer here the comments shared by several reviewers. Additional comments specific to each reviewer are answered in specific answers.*\\n\\nWe sincerely thank all reviewers for their very useful feedback. \\n\\nMost reviewers acknowledged the relevance of our topic of interest (R1, R2, R4), found the approach well motivated (R1, R2, R4) with a strong related work section (R1, R4). R2 found the paper well written (although not well organized) and R4 acknowledged the quality of our experimental section that presents all ablations and a detailed study of generalization properties. \\nHowever, all the reviewers also agreed that the paper lacked a clear statement describing its main focus or contribution. In turn, this led to a poor organization of the method and experiment sections. We agree with all these comments and deeply reorganized the paper to answer these concerns. This answer defines the problem we aim to tackle, clearly states our contributions towards its resolution, and describes the reorganization of the methods and experimental sections to support our claims. \\n\\n**The problem (all)**\\n\\nOur main goal is to design agents that can learn both on their own and under the guidance of a (human) tutor. To learn on their own, these agents need to generate and pursue their own goals, and to learn from their own reward signals. To learn under the guidance of a tutor, they need to learn to fulfill language-based instructions after interacting with a tutor. \\n\\nMost current approaches cannot generate goals themselves, and require externally-provided rewards. Language-conditioned RL approaches, especially, almost always require external instructions and rewards. An exception is IMAGINE (Colas et al., 2020) which combines intrinsically motivated language goal generation and internal rewards. 
\\n\\nNevertheless, the direct conditioning of the policy on language inputs in language-conditioned RL approaches (IMAGINE included) imposes some limitations: \\n1. the agent cannot learn to behave before it starts acquiring language,\\n2. direct conditioning leads to low behavioral diversity for a given language input,\\n3. a direct consequence of 2) is that the agent cannot switch strategy for a given instruction.\\n\\nBy contrast: 1) pre-verbal infants demonstrate goal-directed behaviors (Mandler, 2012); 2) humans can find a diversity of ways to fulfill an instruction; 3) they can switch strategies if the first strategy failed.\\n\\n**Our contributions (all)**\\n\\n**The main contribution** of this paper is to present a new intrinsically motivated RL architecture called Language-Goal-Behavior (LGB). LGB tackles the problem above and demonstrates the three properties. It differs from standard language-conditioned RL by the introduction of an intermediate semantic goal representation (G) between language inputs (L) and behavior (B). This intermediate representation allows the decoupling of language and behavior. Agents can either learn autonomously to target semantic configurations **or** learn to follow instructions by mapping language-based instructions to their semantic goal representation space. We argue LGB demonstrates the 3 properties.\\n\\n**Our second contribution** is the DECSTR learning algorithm: a particular instance of the LGB architecture for manipulation domains. The paper argues that the 3 properties above emerge from the LGB architecture, given sufficiently efficient components: 1) a semantic representation that characterizes interesting behaviors; 2) an intrinsically motivated goal-conditioned RL algorithm that can learn to reach semantic configurations and 3) a language-conditioned goal generator with good precision and recall. DECSTR illustrates that, when these conditions are met, the three properties emerge in the system. 
We do not claim that DECSTR is the most efficient instance of LGB, and future LGB implementations may benefit from improvements in the fields of goal-conditioned RL and/or generative modelling.\\n\\nFinally, some technical aspects of the DECSTR implementation can be seen as additional but minor contributions: the novel curriculum learning strategy and the language-conditioned goal generation module based on C-VAE.\\n\\nWe reframed the paper along these lines. This clearly states the problem targeted by our system (answering R2), what our contributions are (answering R1 and R3) and better focuses the paper.\"}",
"{\"title\": \"The research question and the main contributions are not clear.\", \"review\": \"This paper introduces DECSTR, an agent with a high-level representation of spatial relations between objects. DECSTR is a learning architecture that discovers and masters all reachable configurations from a set of relational spatial primitives. They demonstrated the characteristics in a proof-of-concept setup.\\n\\nIn the introduction, the inspiration obtained from developmental psychology is described. Motivation and background are broadly introduced. A wide range of related works are introduced in section 2.\\nThe motivation and target of this paper are ambitious and important.\\n\\nHowever, from the \\\"methods\\\" part, i.e., section 3, this paper is hard to follow. \\nThe supplementary material helps to understand. However, I believe some of the informative and detailed information in the supplementary material should be moved to the main manuscript.\\n\\nThe proposed method, i.e., DECSTR, is comprised of many components. Therefore, the main contribution is also not clear. # What is the main argument of the paper?\\nExperimental conditions are also hard to follow.\\n\\nIn evaluation, Figure 1 shows ablation studies alone, i.e., comparison with the variants of DECSTR.\\nTherefore, the contribution of the paper is hard to grasp.\\n\\nWe can understand what kind of task is achieved in this paper.\\nCurrently, the paper somehow seems to be a demonstration of DECSTR. \\nIn this sense, if the authors state research questions, challenges, and contributions of this paper more clearly, that will make this paper more impactful.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Well motivated, sum greater than its parts, but some concerns with baselines\", \"review\": \"**Summary**\\nThis paper proposes DECSTR, a goal-driven RL framework where the goal is represented as a binary vector that encodes the semantic relationships between objects. The state is assumed to contain disentangled features for each of the objects (and other features relating to the agent\\u2019s end-effectors). The architecture is based on Deep Sets (Zaheer et al., 2017), which allows pairs of objects to be encoded with a shared network. The paper also introduces a curriculum learning strategy similar to CURIOUS (Colas et al., 2019), which relies on metrics such as competence and learning progress (LP) in order to select goals to pursue during an episode. One key difference is that unlike CURIOUS which uses expert-defined \\u201cgoal buckets\\u201d, DECSTR groups the goals based on recency of discovery. Once trained to be able to behave with respect to these semantic relationship goals, the second phase is language grounding. They learn a module (implemented as C-VAE) that converts from natural language text to the semantic configuration goal space. Experiments were conducted in the Fetch Manipulate robotic arm environment and compared with ablations of DECSTR without some of its components, demonstrating strong performance and generalization to various types of language instructions. \\n\\n**Pros**:\\n- The paper is well-motivated, citing literature from several fields.\\n- The sum is greater than its parts: many components in DECSTR are based on existing works (e.g. Deep Sets, C-VAE, using LP for intrinsically motivated goals, etc.), but empirically they have shown through ablations that all of their components were necessary for the agent to solve the Fetch Manipulation task successfully. 
\\n- The experiment sections are fairly thorough, with ablations on the components of their methods (as said above), and various kinds of language command generalization evaluations (in a similar style to IMAGINE (Colas et al., 2020)). \\n- The interpretability of the semantic goal space aspect is interesting. And being able to have the agent explicitly map from the natural language text to the semantic goal space also helps us debug/understand what the agent is thinking at inference time.\\n\\n**Cons**:\\n- Part of the thesis is that decoupling of sensorimotor learning from language acquisition is advantageous over end-to-end language-to-sensorimotor learning. I have concerns/clarification questions about some of the baselines, which might not have been a fair comparison with DECSTR (see questions 1 & 2 below)\\n- Some parts of the method are unclear/vague without reading the appendix section to get the full detail. I understand that is due to the space limitation issue and because there are so many components to DECSTR. (see question 3)\\n\\n**Recommendation**:\\n \\nOverall, I vote for marginally below acceptance threshold in the current form. As mentioned in the strengths section, I do like the motivation of the paper and the strong performance of the method. But I am also suspicious of the poor performance of the baselines (e.g. Figure 1c), which may be due to not having HER, instead of their proposed contributions. It would be good if the authors can clarify that concern. \\n\\n**Question**:\\n1. In Figure 1c, for the Language Goals baseline, was HER applied to the Language Goals in this case (i.e. similar to ACTRCE (Chan et al., 2019), IMAGINE (Colas et al., 2020))? Similarly, was HER applied to the Position-Goals baseline? If not, then it is possible the difference in performance between DECSTR and these baselines may be due more to HER than due to the difference in goal representation. \\n2. 
Would it be possible to train Phase 1 and Phase 2 together or in an end-to-end fashion? This would provide a \\u2018coupled\\u2019 version that is different from any of the baselines studied in the paper because it still uses the semantic configuration as the intermediate goal representation while having joint training of the language representation and the sensorimotor policy. If this baseline struggles to learn (possibly due to difficult optimization/local minima), then this will help further strengthen the thesis of the importance of decoupling the learning process into two distinct phases. \\n3. Section 3.2: the main text and appendix C.2 were not very clear about the second inductive bias for the symmetry of the behavior required to achieve $above(o_i, o_j)$ and $above(o_j, o_i)$. Are you saying, for example, if we are trying to have object 1 above object 2, then we specify the goal in the form $g_1$, while if we want object 2 above object 1, then we specify the goal in the form $g_2$? \\n\\n**Minor comments**:\\n* When using double quotes in latex, use backticks for the opening quote.\\n\\n**After rebuttal responses**: \\n\\nI have read the authors\\u2019 updated draft and response to my concerns, as well as the other reviews. The updated paper provides a clearer framing and some missing baselines have also been included. I raised my evaluation to a weak acceptance for the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An interesting approach, limited by domain-specificity and poor paper organization\", \"review\": \"This work proposes DECSTR, a procedure for encouraging intrinsic motivation via an intermediate semantic state-space representation. The authors propose an intermediate semantic state space that the intrinsically motivated agent learns to explore. For the environment provided (a 3-block system), the agent fully explores the symbolic state space, reaching all feasible symbolic states. In the second part of the work, the authors train a language model capable of proposing symbolic goals (in the form of symbolic states) from natural language input and show that the previously-intrinsically-motivated agent can now be made to reach these goals, demonstrating that the symbolic-goal-conditioned policy is sufficient for instruction following in their 3-block domain.\\n\\nThe work is generally interesting, and seems to address a simple version of a broader class of problems that embodied agents typically struggle with, particularly in the absence of clear goals. However, the approach presented in the paper (in particular the form of the semantic representation that is claimed as one of the primary contributions of the work) is very specific to the single problem used for demonstrations in the paper, limiting the potential impact of the work.\\n\\nFirst---and I think the most significant issue with the submission---is that many critical experimental details are included only in the lengthy appendix. Much of this information, including the information provided to the learning algorithm at every step and how that information is encoded such that it allows for a relatively object-agnostic representation, is only available in sufficient detail in the appendix. Relatedly, visualizations of the approach and experimental setup also only appear in the appendix, yet are extremely helpful (if not essential) for understanding. 
Detail critical to understanding the approach should be included in the body of the text.\\n\\nSecond, it is unclear exactly what problem is being solved in this work or what its primary contribution is. A clearer statement of its motivations will be necessary before publication. /What problem are the robot or system designers trying to overcome?/ Right now, the paper seems to come up with three potential answers to this question, none of which necessarily rises above the others. Here is what I think the main contributions of the work could be:\\n\\n1. *The proposed semantic representation* The semantic goal representation used to define the space of intrinsic motivation seems to be a novel contribution. However, if the paper were to focus on this aspect of the contribution, it would need to do a better job demonstrating why this representation is useful beyond a relatively small manipulation task. Critically: using only one problem setting with only three blocks is insufficient to convince the reader that this representation is useful more generally (as might be suggested by much of the talk about Inductive Bias).\\n2. *State of the art state-space exploration in intrinsic motivation.* This might be true, though I find such a thing hard to measure. In addition, it seems that many if not all of the tools used in the learning process are not novel. (Perhaps a combination of this and point 1. is the primary contribution.)\\n3. *State of the art performance on language-driven block manipulation tasks.* This might be true as well, but the results are so far unconvincing. All baselines are varied forms of the proposed agent, which makes it difficult to compare against other approaches (e.g. 
something like Li et al., 2019).\\n\\nThe paper currently seems to claim that the combination of progress in these three areas is a novel contribution; I am sympathetic to this idea (as I do not believe that every paper needs to be \\\"state of the art\\\" in one single thing), though it is sufficiently unclear at the moment what the takeaway message of the paper is that I cannot recommend it be published in its current state. In particular, the authors need to work on honing the message of the paper. It is also not unlikely that one or two more experiments will need to be added to support the focused narrative.\\n\\nSmaller comments\\n- The name of the algorithm should appear in the body of the text, not a footnote. Relatedly, it is unclear how the proposed approach uses the \\\"Deep Sets\\\" work in such a way that it justifies inclusion in the name of the proposed technique.\\n- The paper/Introduction would benefit from a summary of contributions: even after reading, it may not be clear to a reader which contributions are from this paper versus other work.\\n- Relatedly, much of the discussion of Inductive Biases that appears throughout the paper is of mixed relevance for this work. On the one hand, it is clear how the idea of an object-centric inductive bias helped to inform how the input to the neural network was encoded in a way that might allow the agent to apply its knowledge learned between two of the objects to a policy that allows it to manipulate all three. However, the goal condition is necessarily specific when it comes to representing which objects each element refers to. The structure of the goal and the semantic relations it encodes are quite specific to the particular problem at hand, and it is\\n- The reward for the \\\"Position only\\\" baseline seems artificially constructed: a non-binary reward function would likely allow the system to learn more easily. 
As of now I am unconvinced that the authors have worked hard enough to make a fair baseline for comparison. This is particularly problematic since this baseline is a key motivator for the existence of the proposed semantic goal representation.\\n- The paper overall is quite well written, despite relegating too much information to the appendices.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper presents a method for two-stage self-supervised RL, where an agent first acquires semantic concepts and second grounds language tokens to these concepts (object relationships, primarily). While motivated by language grounding, the training and evaluation paradigms do not include any natural language input from human users, and object relationships are limited to two binary predicates between three objects.\", \"review\": \"The DECSTR system's intrinsic motivations may be applicable to other application domains, depending on how objects and relations are enumerated. This potential is not explored beyond the toy environment presented. The learning methods (especially inductive biases) are hand-crafted based on human-level knowledge about semantic predicates, but only two (\\\"above\\\" and \\\"close\\\") are demonstrated. Without demonstrating the system on any other configuration or world, it's difficult to tell whether it's able to solve only the problem it's been crafted to solve in this specific environment.\", \"questions\": \"3.1 \\\"in principle ... could use any other combination of binary predicates and could be extended to use n-ary predicates\\\" this claim is not demonstrated in the paper, and in 3.2 the inductive biases seem bespoke crafted for binary predicate 'above' which has particular symmetry. Would similar careful design of inductive biases be necessary and possible for n-ary predicates that do not demonstrate these as easily (e.g., \\\"topmost\\\")? What about predicates that involve an unspecified number of discrete arguments, like \\\"base\\\" -> holding up an indefinite N of other objects/structures in \\\"use the green block as the base\\\".\\n\\n3.4 \\\"or is union\\\" this doesn't generally hold for natural language. A statement like \\\"put the red block or the green block above the yellow block\\\" does not mean to put both red and green (union of goals) above yellow. 
Typically langauge \\\"or\\\" is \\\"xor\\\"; is the notion of \\\"or\\\" here not given in language or not meant to represent human language?\", \"areas_for_improvement\": \"5 \\\"a learning architecture that discovers and masters all reachable configurations from a set of relational primitives\\\" this is literally true but only demonstrated on 'a' single set of relational primitives, so it feels like overclaiming.\", \"nits\": [\"double citation for Mandler, 2012 in intro in adjacent sentences can be condensed to once\", \"footnotes on other side of period\", \"\\\"Besides\\\" in \\\"Blocks Manipulation\\\" seems a bit off-sounding; maybe \\\"In addition,\\\"?\", \"typo section 3 \\\"based o abstract\\\"\", \"Typo section 5 backwards quotes \\\"overlapping waves\\\" LHS.\", \"\\\"Caregiver\\\" in section 5 is an unintroduced role. The rest of the paper does not frame DECSTR or the oracle generator this way.\", \"Ending the paper with \\\"etc.\\\" feels weird/informal.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
tL89RnzIiCd | Hopfield Networks is All You Need | [
"Hubert Ramsauer",
"Bernhard Schäfl",
"Johannes Lehner",
"Philipp Seidl",
"Michael Widrich",
"Lukas Gruber",
"Markus Holzleitner",
"Thomas Adler",
"David Kreil",
"Michael K Kopp",
"Günter Klambauer",
"Johannes Brandstetter",
"Sepp Hochreiter"
] | We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new Hopfield network can store exponentially (with the dimension of the associative space) many patterns, retrieves the pattern with one update, and has exponentially small retrieval errors. It has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. The new update rule is equivalent to the attention mechanism used in transformers. This equivalence enables a characterization of the heads of transformer models. These heads perform in the first layers preferably global averaging and in higher layers partial averaging via metastable states. The new modern Hopfield network can be integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes.
These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. We demonstrate the broad applicability of the Hopfield layers
across various domains. Hopfield layers improved state-of-the-art on three out of four considered multiple instance learning problems as well as on immune repertoire classification with several hundreds of thousands of instances. On the UCI benchmark collections of small classification tasks, where deep learning methods typically struggle, Hopfield layers yielded a new state-of-the-art when compared to different machine learning methods. Finally, Hopfield layers achieved state-of-the-art on two drug design datasets. The implementation is available at: \url{https://github.com/ml-jku/hopfield-layers} | [
"Modern Hopfield Network",
"Energy",
"Attention",
"Convergence",
"Storage Capacity",
"Hopfield layer",
"Associative Memory"
] | Accept (Poster) | https://openreview.net/pdf?id=tL89RnzIiCd | https://openreview.net/forum?id=tL89RnzIiCd | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"WLfnFWzoqnp",
"NIctU6e5MxV",
"e432F1tQpzC",
"01xnbsXe2b1",
"m4JXwVy-pCR",
"DzD5Lil_BJT",
"z4pH0FlkxHI",
"d0DwejhNgq"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040493188,
1605631185562,
1605631048112,
1605630856360,
1605630702819,
1604181668882,
1604079601211,
1603768277433
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3489/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3489/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3489/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3489/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3489/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3489/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3489/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": [\"The novelty of the paper are:\", \"introduces a new Hopfield network with continuous states, hence can be learned end-to-end differentiation and back propagation.\", \"derives efficient update rules\", \"reveals a connection between the update rules and transformers\", \"illustrate how the network can be used as a layer in deep neural network that can perform different functions\", \"The presentation was clear enough for the reviewers to understand and appreciate the novelty, although there were a few points of confusion. I would recommend the authors to address several suggestions that came up in the discussions including:\", \"additional analysis to highlight when and how the networks is able to outperform other competing models\", \"intuitions about the proofs for the theorems (okay to leave the detailed derivation in the appendix)\"]}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your helpful review. Please, let us expand a bit on the weaknesses you pointed out.\\n\\n\\n* \\u201cnot enough detail in this paper for me to understand how the models are implemented or why the model works better than other approaches.\\u201d & \\u201cFor example, section 3 declared 3 types of Hopfield layers, but without any formal definitions to them\\u201d: Sorry for the bad description. We massively extended Section 3, to describe (i) our main goal, (ii) how Hopfield networks are integrated into deep learning architectures, and (iii) how the layers are designed. We also give possible applications of the layers. Then we refer to the experiments, where the layers are used.\\nSince gradients have to be propagated through these layers, we aim at obtaining continuous Hopfield networks that are differentiable and can retrieve by one update step. One update is equivalent to updating a layer in a neural network.\\n\\n* \\u201cwhy the model works better than other approaches.\\u201d & \\u201cbut lacks any analysis of why the proposed models work better\\u201d:\\n We now give explanations why the proposed models work better. In particular, we show that Hopfield layers can realize k-nearest neighbor (with a learned distance metric), SVM-like methods (storing support vectors or reference vectors), similarity-based methods (similarity to stored patterns), learning vector quantization (stored patterns are the centers of clusters), etc. We also show that the transformer model can readily be implemented. Furthermore, the layers can perform simple pooling operations or store the elements of a time series, therefore, can replace LSTM or GRU layers.\\n\\n* \\u201clack of motivation in the introduction section.\\u201d: \\nWe added a contribution paragraph to the introduction and massively extended Section 3. 
We now write in the abstract: \\u201cThese Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for a very insightful review that helps us to improve our paper.\\n\\n\\n* \\u201cclarity about the optimization in the new proposed variant of hopfield networks\\u201d:\\n We massively extended Section 3, to describe (i) our main goal, (ii) how Hopfield networks are integrated into deep learning architectures, and (iii) how the layers are designed.\\n\\n* \\u201cmotivation behind update equations\\u201d: \\nIn the appendix \\u201cA.1.3 NEW UPDATE RULE\\u201c, we give in Eq. (29) for comparison, the synchronous update rule for the classical Hopfield network with threshold zero, which is very similar to our update rule but without the softmax. In appendix \\u201cLemma A18\\u201d, we show that the softmax is the derivative of the Log-Sum-Exp Function.\\n\\n* \\u201cintuition behind convergence in one update\\u201d: \\nSorry, we mixed up mathematical convergence and retrieval, which is the convergence in praxis. We now separated mathematical convergence from retrieval (being close to a fixed point). We now define retrieval by an update that comes epsilon-close to the fixed point. Random patterns are mentioned.\\n\\n* What happens to the updates / optimization when the patterns are not well separated?: \\nWe discuss this case in the paragraph \\u201cMetastable states and one global fixed point.\\u201d We write: \\u201cIf some vectors are similar to each other and well separated from all other vectors, then a metastable state near the similar vectors exists. Iterates that start near the metastable state converge to this metastable state, also if initialized by one of the similar patterns.\\u201d In this case these similar patterns are retrieved collectively. 
We write further: \\u201cIf no pattern is well separated from the others, then the iteration converges to a global fixed point close to the arithmetic mean of the vectors.\\u201d \\n\\n* \\u201cIs the trend in Fig 2 observed across more or less across all datasets?\\u201d: \\nFig. 2 (now moved into the appendix) is specific to the transformer architecture and exemplary NLP tasks. We give in the appendix additional examples for this trend.\\n\\n* Other comments \\u201cmax-margin classifiers / kernel methods\\u201d: \\nWe now give the connections to SVMs as the stored patterns can serve as support vectors. However, we do not see an obvious relation to max-margin classifiers.\\n\\n* Other comments \\u201cnon-linear transformations\\u201d: \\nNon-linear activation functions have been used for the experiments in immune repertoire classification. We clarify that more, in particular in the appendix.\\n\\n* \\u201cusing them in large scale language modeling tasks where transformers are popular right now.\\u201d: \\nWe have shown that the transformer attention mechanism is exactly the update rule of a modern Hopfield network. The transformer architecture is one example of applying our approach with modern Hopfield networks. The layer Hopfield together with residual connections (skip connections) gives the self-attention layer of the transformer. Also the encoder-decoder attention layer of the transformer can be realized, where Y comes from the encoder and R from the decoder. Also the layernorm is supplied automatically for the Hopfield layer by our Pytorch implementation. Since there are already many experiments with transformers, we focus on new tasks that can be solved with Hopfield networks. \\nThe attention mechanism of transformers is just an associative memory, where queries serve to retrieve keys that are stored. However, we can supply more functionalities by other Hopfield layers. Therefore, we focus on experiments, where these new architectures have not been tested.
We achieved many new state-of-the-art for different datasets.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": [\"Thank you for a very elaborate review that helps us to improve our paper. We address the points individually.\", \"\\u201cI was left me wondering about the added value of this new model\\u201d:\", \"The main goal of the paper is to integrate associative memories (the Hopfield layers) into deep learning architectures as layers. Therefore each layer can store and access raw input data, reference data, intermediate results, or (learned) prototypes. These Hopfield layers enable new ways of deep learning, beyond e.g. CNNs or recurrent networks. Hopfield layers can be used for multiple instance learning, point sets, or learning to process sequences. They enable the substitution of k-nearest neighbor, support vector machine models, or learning vector quantization in each layer separately. Also the transformer\\u2019s self-attention and encoder-decoder attention are examples of Hopfield layers, but the interpretation as an associative memory is novel.\", \"\\u201cIt was not clear to me what is gained by this greater complexity and whether the gains justify the larger complexity.\\u201d:\", \"A continuous Hopfield network (not discrete) is necessary in order to enable end-to-end differentiable models. The continuous Hopfield network is integrated as a special layer in deep learning architectures, where backpropagation requires this layer to be differentiable. Therefore, we also investigate if one update is sufficient for being close to the fixed point. Integrated into a deep learning architecture, only one Hopfield update step should be performed, which is equivalent to updating a layer in a neural network. The reviewer might be right in their assumption that discrete networks might also do the job and the continuous models are the discrete models in disguise. However, it is not clear how to learn the weights that map to the embedding space, where the Hopfield network stores and retrieves patterns. 
We massively extended Section 3, to describe (i) our main goal, (ii) how Hopfield networks are integrated into deep learning architectures, and (iii) how the layers are designed.\", \"\\u201cbreaking their long paper to two different sections, one presenting the theoretical advantages of their new model and the other focusing on practical benefits\\u201d:\", \"Thanks for this advice. We do that: Section 2 is dedicated to theoretical considerations and Section 3 to practical / implementation details. We now massively extended Section 3.\", \"\\u201cComment 2\\u201d and \\u201cthe nature of convergence to a fixed point wasn't clear to me\\u201d & \\u201cconverge in one update step\\u201d:\", \"Sorry, we mixed up mathematical convergence and retrieval, which is the convergence in praxis. We now separated mathematical convergence from retrieval (being close to a fixed point). We now define retrieval by an update that comes epsilon-close to the fixed point. Random patterns are mentioned.\", \"\\\"Comment 3: proven for c= 1.37 and c= 3.15 in Theorem 3\\\": Sorry for the ambiguous formulation. It is proven if the assumptions in Theorem 3 are fulfilled, reasonable settings which fulfill the assumptions are given for c=1.37 and c=3.15. This has been corrected.\", \"\\u201cComment 4: true for random patterns\\u201d: Yes. We mention that now.\", \"\\u201cComment 5: Is beta>0\\u201d: Yes. We mention that now.\"]}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"We thank all reviewers for their time and for their constructive feedback. It helped us a lot to improve our paper. We hope to answer all questions and provide clarifications in individual responses to the respective reviewers. Further, we uploaded a rebuttal revision of our paper incorporating your sound suggestions. Concretely, we massively extended Section 3, to describe (i) our main goal, (ii) how Hopfield networks are integrated into deep learning architectures, and (iii) how the layers are designed.\"}",
"{\"title\": \"Interesting results but some questions arise\", \"review\": \"This paper considers a continuous version of the classical Hopfield network (HN) model.In contrast to well studied discrete models where the patterns (vectors) that are\\nstored are discrete, this paper studied continuous vectors and a new continuous energy function.\\nConvergence results to a fixed point are proven for the new rule, and it is shown that for the case of random patterns, the Hopfield network can memorize exponentially many patterns (with high probability).\\u00a0 Finally several implementations are given showing how incorporating the new Hopfield net in classification tasks can improve classification accuracy in regimes where \\ndata is scarce and where neural networks do not fare well. \\n\\nThe paper is rather long and I did not verify all results. The description appears sound.The proofs appear non-trivial and rather technical. While the results here are nontrivial I was left me wondering about the \\nadded value of this new model. One of the biggest advantages of HN was its simplicity and elegance. More recent results of Hopfield and others with higher degree energy functions managed to maintain this clarity and brevity. The new model however is significantly more involved. It was not clear to me what is gained by this greater complexity and whether the gains \\njustify the larger complexity. In actual implementations very limited precision is often necessary.How does this discretization influence the continuous model? How robust is it to rounding errors? Don't we get \\\"old\\\" discrete models in disguise? \\n\\nThe (impressive) empirical results raise similar questions. Can't we use old discrete HN instead of the new model and achieve similar results? It would be perhaps more informative to compare different HN to the new model presented in this paper. It seems a bit strange that previous uses of HN (discrete ) did not achieve such an improvement in previous studies. 
It would be beneficial to add more on related work in this area. \\n\\n The authors might consider breaking their long paper to two different sections, one presenting the theoretical advantages of their new model and the other focusing on practical benefits. \\n\\nFinally, the nature of convergence to a fixed point wasn't clear to me. It seems likely that if patterns are not random convergence can take a long time as is the case for discrete HN.\", \"some_recent_work_about_the_complexity_of_finding_fixed_points_of_continuous_functions_may_be_relevant_here\": \"A converse to Banach's fixed point theorem and its CLS-completeness.\", \"more_specific_comments\": \"1) The paper starts with a rather lengthy discussion of previous work. \\nI would recommend outlining the contributions of this paper earlier on. \\n2) \\\"converge in one update step with exponentially low error and have storage capacity proportional to...\\\" It was not clear to me that random patterns are considered here. \\n3) \\\"proven for c= 1.37andc= 3.15 in Theorem 3\\\" for what c exactly is the result proven? \\n4) \\\"Furthermore, with a single update, the fixed point recovered with high probability\\\"I presume this is true for random patterns? \\n5) Is beta>0?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Paper makes good technical contribution draws interesting connections between classical Hopfield networks and Attention Mechanism in transformers\", \"review\": \"The paper introduces a new Hopfield network which have continuous states and propose update rules for optimizing it. It also draws connections between the new model and attention mechanism used in transformers. Small scale empirical study is presented.\\n\\nOverall I like the technical contribution of the work but feel the paper could be revised to improve clarity about the optimization in the new proposed variant of hopfield networks. Below some specific comments:\", \"pros\": [\"connecting hopfield networks to attention mechanism and drawing out the variants in section 3 (as hopfield layers) is useful\", \"The exposition in section 1 and 2 where the authors describe the hopfield network with continuous states is written well (although I do feel the motivation behind update equations could be explained a bit better)\"], \"cons\": [\"As I mentioned earlier, I don't fully understand the intuition behind convergence in one update. Can the authors clarify this? Also the paper mentions update rule in eqn (5) converges after one update for well separated patterns. What happens to the updates / optimization when the patterns are not well separated? This should be discussed after equation (5). Maybe present different scenarios to make it clear.\", \"Empirical study is limited in my opinion and can be improved. Is the trend in Fig 2 observed across more or less across all datasets? Can the authors comment on this? I like the visualization in the figure but it is bit hard to interpret (perhaps a more clearer label for it could help with that).\"], \"other_comments\": [\"The idea of separated patterns leads me to ask this question: is there any connection of this work to max-margin classifiers / kernel methods?\", \"Did the authors consider what would happen if non-linear transformations (e.g. 
activation functions in DNNs) are applied on top of the inputs? How does the existing network change in that case?\", \"Can the authors comment on the utility / challenges in applying their proposed method on datasets / tasks beyond the small scale UCI datasets used in their experiments? e.g. using them in large scale language modeling tasks where transformers are popular right now.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Very interesting but missing critical content\", \"review\": \"This work extends the binary Hopfield network (Demircigil et al., 2017) to continuous patterns and states. Connections are drawn between the result model to the attention layers of the transformers, the pooling operation of LSTM, similarity search, and fully connected layers. Experimental results are briefly described for analyzing the attention of Bert models, multiple instance learning, and small UCI classification tasks.\\n\\nThe proposed model seems very interesting, and the proposed applications seem reasonable at a very high level. However, there is just not enough detail in this paper for me to understand how the models are implemented or why the model works better than other approaches.\\n\\nFor example, section 3 declared 3 types of Hopfield layers, but without any formal definitions to them, or how they are integrated to the proposed models. The experiment section compares performances with existing models, but lacks any analysis of why the proposed models work better. Similarly, there is a lack of motivation/intuition in the introduction section.\\n\\n## After author feedback ##\\nThanks for the paper update, and now I have a better understanding of the proposed approach. I have updated my review to the following:\\n\\nPreviously Widrich+ (2020) showed that integrating transformer-like attention (or equivalently modern Hopfield networks based on softmax) into deep learning architectures outperforms existing methods (kNN and logistic regression) for massive MIL such as immune repertoire classification. More specifically a pooling layer can be formed by attending over a repertoire of instances with a fixed (but learnable) query vector.\\n\\nThis work provides theoretical analysis of such a layer for its energy function, convergence of updates, and storage capacity, and points to directions of how such a layer can be understood and controlled. 
It extends the previous experiment:\\n1) apply HopfieldPooling (attention with fixed learnable query Q) to more MIL datasets (animal image and breast cancer) and achieve state of the art results. \\n2) apply Hopfield (attention) to 75 small UCI benchmarks replacing feedforward nets. Here Selu units (Klambauer+ 2017) are used to map input to storage Y and query R. The result is quite positive beating previous approaches including SVM, random forest, and SNN (Klambauer+ 2017)\\n3) apply HopfieldLayer (attention with fixed training data Y as storage) to 4 drug design tasks acting as an instance-based learning approach.\\n\\nThe result seems quite interesting indicating that general purpose layers such as feedforward, pooling and nearest neighbors can be improved (in terms of robustness, learnability, or controllability) by adding attention like operations.\\n\\nI think the paper can talk less about existing results, and focus more on the new results and their analysis:\\n- remove [Immune Repertoire Classification] result since it is from previous work.\\n- move the Drug Design experiment details to the main text, and add some comment about under what condition Hopfield outperforms/underperforms RF.\\n- for the UCI benchmark experiment the transformer layer (Vaswani+ 2017) seems to be a natural baseline and should be compared to.\", \"suggestions_for_the_presentation\": [\"Should only in the future work section state that Hopfield can potentially substitute LSTMs or GRUs, since it is all hypothetical with no experiment result at this point.\", \"The word \\\"implemented\\\" in Section 4 seems misleading as there is nothing changed in the Bert model structure? \\\"Transformer and BERT models can be implemented by the layer Hopfield.\\\"\", \"Can be more specific in descriptions.
For example in the description of (2) Layer HopfieldPooling and (3) Layer HopfieldLayer in Section 3, R and W_K can be referenced again for \\\"state (query) patterns \\\" and \\\"The stored (key) patterns\\\" respectively.\", \"It is probably more informative to replace figure 1 with a table to directly compare the energy function and updating rules of different Hopfield nets--i.e., classical, exponential and attention.\", \"Avoid using \\\"x\\\" in equation 1, since the symbol has already been used for the stored patterns.\", \"\\\"HopfieldLayer\\\" seems to be a very strange name.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
34KAZ9HbJco | Adapt-and-Adjust: Overcoming the Long-tail Problem of Multilingual Speech Recognition | [
"Genta Indra Winata",
"Guangsen Wang",
"Caiming Xiong",
"Steven Hoi"
] | One crucial challenge of real-world multilingual speech recognition is the long-tailed distribution problem, where some resource-rich languages like English have abundant training data, but a long tail of low-resource languages have varying amounts of limited training data. To overcome the long-tail problem, in this paper, we propose Adapt-and-Adjust (A2), a transformer-based multi-task learning framework for end-to-end multilingual speech recognition. The A2 framework overcomes the long-tail problem via three techniques: (1) exploiting a pretrained multilingual language model (mBERT) to improve the performance of low-resource languages; (2) proposing dual adapters consisting of both language-specific and language-agnostic adaptation with minimal additional parameters; and (3) overcoming the class imbalance, either by imposing class priors in the loss during training or adjusting the logits of the softmax output during inference. Extensive experiments on the CommonVoice corpus show that A2 significantly outperforms conventional approaches. | [
"speech recognition",
"multilingual",
"long-tail",
"adapter",
"logit adjustments"
] | Reject | https://openreview.net/pdf?id=34KAZ9HbJco | https://openreview.net/forum?id=34KAZ9HbJco | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"6FEB9sWzHh",
"3MzOA4plY2L",
"p8V0rBcaGou",
"ajWdOOp-c1_",
"_AwUao5Z5PH",
"Abs6kJEPOIS",
"bMX4W3Hlc6D",
"J4Yr0-qZrQ",
"v8VKCulOBKd",
"ToY19jJfIo",
"_nuX9OJ12er",
"zbvalG-CEKQ",
"kjX4RNWIlA",
"WYGcuNmQyJv",
"um3uIS-vK0c",
"Jo24Y4cz6eX",
"A50BpS-ipA0",
"LX2AQdUe6io"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040377940,
1606276975994,
1606199430354,
1606199269996,
1606198998293,
1605798145899,
1605776280233,
1605773466247,
1605773404664,
1605772991155,
1605772510026,
1605772262233,
1605772126500,
1604555513972,
1604025596730,
1603984754858,
1603938079129,
1603897078356
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3487/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3487/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3487/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3487/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3487/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"As one of the reviewers' comment, the paper presents \\\"a mixed of tricks\\\" for the multilingual speech recognition, which includes 1) the use of a pretrained mBERT, 2) dual-adapter and 3) prior adjusting.\\nFirst, the relative gains of the pretrained mBERT is marginal (Section 3.3.1). Secondly, using 1) on top of 2) is unnecessary. \\nThese confuses the reader about what the conclusion of the paper is. \\nIt would be better if choosing one aspect of the problem and investigate it deeper. \\n\\nThe decision is mainly because of the lack of novelty and clarity.\"}",
"{\"title\": \"Continued Responses to Reviewer 3\", \"comment\": \"**Q: What is meant by the fourth bullet point in the contributions? Is there a new dataset? I do not understand the contribution**\\n\\nTo the best of our knowledge, there is no existing benchmark for ASR with a focus on long-tailed class distribution. CommonVoice is a public dataset with multiple languages. However, there is no standard partition of data for the long-tail distribution study. Therefore, our contribution lies in curating a subset of CommonVoice data as a benchmark for multilingual speech recognition. This work will help future researchers have a standard partition for benchmarking their multilingual ASR systems.\\n\\n**Q: The use of previous tokens as input, i.e. not using teacher forcing, during the later stages of training (Eq. 10) is unconventional. It would be more convincing if the author discussed this a little more, including why it improves quality.**\\n\\nScheduled sampling is widely used in training end-to-end ASR systems to reduce the mismatch of training and inference. During inference, the prediction of the current token depends on the previous ones due to the autoregressive decoder. During training, we can use teacher forcing to use the ground-truth label of the previous token for forward and loss computation. During inference, there is no label for the previous token, and we can only rely on the prediction and beam search to decode the current token. To reduce such mismatches, during later training stages, at a certain probability (e.g., 0.3), instead of using the ground-truth label, we use the previous token with the largest posterior to let the model handle such discrepancies, which is helpful for the final decoding.\\n\\n**Q: It's unclear how x_{CTC} is defined in fig 1. Is it the output of the encoder? Likewise it's unclear how the function f is defined in fig 1.**\\n\\nYes, x_{CTC} is the output. 
Function f is the linear function for computing the logits before applying the softmax. We have updated the paper to make it more clear.\\n\\n**Q: Fig 7 and comments to it should be moved to the main paper. It is essential for understanding how mbert is integrated into the decoder as that is a big part of the contribution.**\\n\\nWe moved Fig 7 to the main paper as Figure 3.\"}",
"{\"title\": \"Clarifying the misunderstanding in monolingual settings, continued (2)\", \"comment\": \"-While the best results in the revised Table 1 show some degradation of performance for \\u201cen\\u201d (21.6 vs. 22.0) with the same multilingual token set, the comparison of SMT and A2 is not fair. SMT is trained with random sampling, and A2s are trained with balanced sampling. A2 should be compared with the balanced sampling version of SMT given in the BS result row.\\n>**Your response:** I disagree with the claim that the comparison between SMT and A2 is \\\"not fair\\\" because of their training strategy. While, yes, the balanced sampling exasperates the differences between high and low resourced languages, there is no requirement or expectation for SMT to follow the A2 sampling strategy. It is a fair comparison to say that SMT is a reasonable multilingual baseline, and A2 should be attempting to surpass its performance, not surpass a weakened version of it. That said, the ablation results in Table 2 are clearly informative to how the Adapt step helps mitigate the impact of balanced sampling, and lead to the conclusion that A2 is able to provide further improvements to lower resourced languages. However, these improvements do come at a modest cost to higher resourced languages (as demonstrated by the comparison to SMT).\\n>>***Our new response:*** Thanks for the comment. We agree with your comment that SMT is a decent multilingual baseline. We have revised the paper to keep comparing A2 with it and acknowledge the modest cost for en and fr in the text.\"}",
"{\"title\": \"Clarifying the misunderstanding in monolingual settings, continued (1)\", \"comment\": \"> **Your response:** I would like to avoid assigning intention to this modification, and I would hope that it is in the interest of a more transparent understanding of model behavior rather than seeking to present the proposed approach in a more positive light by reporting worse baseline performance.\\nI can't think of any good reason to remove the initial monolingual baseline numbers. If these results -- monolingual modeling to multilingual targets -- are a useful point of comparison between the monolingual baseline and the A2 results, then they should be included as well, but not instead of the monolingual baseline. The author's could then attribute the monolingual regressions to the difficulty of making predictions to a larger and more complicated target set. The original observation that A2 is substantially worse than monolingual training on 'en' and 'fr' should remain.\\n>> ***Our new response:*** Thank you for your response. Here we would like to clarify some misconfigurations in our monolingual study.\\nFirstly, we would like to emphasize that the change is purely for the sake of fair comparison and would like to know how different token sets would impact the model behavior. The purpose of the A2 is not to significantly outperform the monolingual performance for all languages, especially the higher resourced languages; rather, we want to achieve a better-balanced error performance among all languages. We replaced the first version of monolingual results because initially, when we prepared the first response to your comments, we thought the new monolingual setting only differed from the first version in terms of token set. 
We know how unethical it is to deliberately hide the model limitations with misleading results, and that is certainly not our intention.\\n\\n>> Having read your new responses and to avoid misunderstanding, we\\u2019ve decided to put both sets of results in the table. To explain the gaps between the two token sets, we performed another round of inspection of our two sets of monolingual studies. We found a significant factor we overlooked before: in addition to the token set differences, we wrongly reported the initial monolingual baseline results obtained from the monolingual models trained with a much larger data set. These models were trained for our initial plan of investigating unsupervised representation learning for multilingual ASR on the CommonVoice data before we embarked on the A2 framework. For example, \\u201cen\\u201d was trained on 878 hours (CER 13.3) of data instead of 80 hours (CER 21.6), and \\u201cfr\\u201d was trained on 273 hours (11.5) of data instead of 40 hours (19.8).\\n\\n>> For the long-tail problem presented in the paper, we curated a new subset of the original multilingual data with careful partitioning of the train, dev, and test sets for each language. The reason for curating such a subset for our long-tail study is mainly a shorter experiment turnaround time without compromising the long-tail language and word piece class distribution. Since we are working with 11 languages and each multilingual model training takes 2-3 days on a single GPU, the larger data would require more than two weeks for each setting on our GPU machines. For now, we would like to clarify this in the revised paper; the significant gaps between the two monolingual settings are due to the training data size. We will produce a new set of monolingual results with the same amount of training data as the multilingual training (e.g., 80 hours) but with monolingual tokens rather than the subset of multilingual tokens. 
We have kept the best monolingual results, trained on the much larger training data, together with the training data sizes in Table 1 for comparison. In addition, based on the current study, we would conjecture that even if the same amount of the larger training data as in the best monolingual setting were used for multilingual training, our A2 model could still maintain a similar performance gain over the SMT baseline, since the data distribution would be even more imbalanced: the increase in training data size for the high-resource languages is significantly larger than for the low-resource languages (the training data sizes of the last three languages are the same in all monolingual settings).\"}",
"{\"title\": \"Clarifying the misunderstanding in monolingual settings\", \"comment\": \"Thank you again for the feedback regarding our revisions. There was probably some misunderstanding in the previous revision, and we also found some wrong configurations that caused the large gap in monolingual CERs; detailed comments are given below.\\n\\n- The main motivation of multilingual recognition is to recognize multiple languages with a single model. This not only saves the trouble of creating a separate phone set, language model, and decoder for each language, enabling faster deployment and easier maintenance, but also allows the multilingual training to help the individual languages, especially the low-resource languages.\\n> **Your response:** The initial review did not intend to undermine the motivation of multilingual recognition. Rather I was suggesting a perspective that would highlight the contributions of this work. Specifically, it is able to improve performance on lower resourced languages by training with high resource languages. The approach is less compelling in its ability to recognize higher resourced languages due to the substantial degradation of performance on these.\\n>> ***Our new response:*** We appreciate your comments and suggestions. In the following comments, we explain the gap between the monolingual numbers in the first paper version and the A2 systems, as we have found that some of the higher-resourced monolingual training in our original version used a significantly larger training set than the multilingual training.\\n- As for the comparison of multilingual and monolingual performance, we would like to clarify that the token sets of the monolingual and multilingual models are not the same. The token set for monolingual training is generated only from the specific language and has a much smaller number of target labels. On the other hand, the multilingual training token set is generated from pooled texts from all languages. 
Thus, the complexity of the monolingual model is much less than the multilingual model.\\n-The monolingual token set has 150 tokens per language, whereas, for multilingual training, there are more than 5K tokens in total (see Table 8). For example, for \\u201cen,\\u201d there are 243 tokens, and \\u201cfr\\u201d has 382 tokens.\\n-To make sure, we performed a fair comparison for our multilingual model to the monolingual model after re-training all monolingual models with the same token set as the multilingual setting (5K). We revised the numbers for monolingual in Table 1 to avoid confusion. We found that the gap between monolingual and multilingual models for \\u201cen\\u201d is very small (21.6 vs. 22.0), and for \\u201cfr,\\u201d A2 improves from 19.8 to 17.7 CER. We reported these numbers in Table 1 of the revised paper. Lastly, the new results demonstrate the advantages of the A2 framework by improving all languages compared to the monolingual models (except for a small degradation of performance for en from 21.6 to 22.0).\\n> **Your response:** I find this decision to be very troubling. Why should monolingual models be trained to multilingual targets? The baseline that was included in the original paper -- monolingual acoustic modeling to monolingual sentence piece targets -- was appropriate. Part of the \\\"cost\\\" of developing a multilingual model is the complexity of needing to recognize multiple languages. This modified baseline is suggesting that monolingual models (despite being trained only on a single language) should be able to recognize out of language targets, but also must learn that they are out of language. This is a remarkable requirement for monolingual training.\\n>> ***Our new response:*** Thanks again for the prompt and insightful comments. There might be some misunderstanding here. 
By \\u201cmonolingual models be trained to multilingual targets,\\u201d we mean the current monolingual experiments use the subset of tokens from the 5K multilingual targets that belong to the particular language. For example, we used only 243 targets for \\u201cen\\u201d rather than the whole 5K plus targets. Therefore, the monolingual model is only trained and evaluated on the 243 \\u201cen\\u201d token set, and there are no \\u201cout of language\\u201d targets.\"}",
"{\"title\": \"Troubling modification of baseline performance\", \"comment\": \"Responses inline.\\n> * The main motivation of multilingual recognition is to recognize multiple languages with a single model. This not only saves the trouble of creating a separate phone set, language model, and decoder for each language for faster deployment and easier maintenance, the multilingual training will help the individual languages, especially the low-resource languages.\\n\\nThe initial review did not intend to undermine the motivation of multilingual recognition. Rather I was suggesting a perspective that would highlight the contributions of this work. Specifically, it is able to improve performance on lower resourced languages by training with high resource languages. The approach is less compelling in its ability to recognize higher resourced languages due to the substantial degradation of performance on these.\\n\\n> * As for the comparison of multilingual and monolingual performance, we would like to clarify that the token sets of the monolingual and multilingual are not the same. The token set for monolingual is generated only from the specific language and has a much smaller number of target labels. On the other hand, the multilingual training token set is generated from pooled texts from all languages. Thus, the complexity of the monolingual model is much less than the multilingual model.\\n> * The monolingual token set has 150 tokens per language, whereas, for multilingual training, there are more than 5K tokens in total (see Table 8). For example, for \\u201cen\\u201d, there are 243 tokens, and \\u201cfr\\u201d has 382 tokens.\\n> * To make sure, we performed a fair comparison for our multilingual model to the monolingual model after re-training all monolingual models with the same token set as the multilingual setting (5K) and revised the numbers for monolingual in Table 1 to avoid confusion. 
We found that the gap between monolingual and multilingual models for \\u201cen\\u201d is very small (21.6 vs. 22.0), and for \\u201cfr\\u201d, A2 improves from 19.8 to 17.7 CER. We reported these numbers in Table 1 of the revised paper. Lastly, the new results demonstrate the advantages of the A2 framework by improving all languages compared to the monolingual models (except for a small degradation of performance for en from 21.6 to 22.0).\\n\\nI find this decision to be very troubling. Why should monolingual models be trained to multilingual targets? The baseline that was included in the original paper -- monolingual acoustic modeling to monolingual sentence piece targets -- was appropriate. Part of the \\\"cost\\\" of developing a multilingual model is the complexity of needing to recognize multiple languages. This modified baseline is suggesting that monolingual models (despite being trained only on a single language) should be able to recognize out of language targets, but also must learn that they are out of language. This is a remarkable requirement for monolingual training. \\n\\nI would like to avoid assigning intention to this modification, and I would hope that it is in the interest of a more transparent understanding of model behavior rather than seeking to present the proposed approach in a more positive light by reporting worse baseline performance.\\n\\nI can't think of any good reason to *remove* the initial monolingual baseline numbers. If these results -- monolingual modeling to multilingual targets -- are a useful point of comparison between the monolingual baseline and the A2 results, then they should be included *as well*, but not *instead of* the monolingual baseline. The author's could then attribute the monolingual regressions to the difficulty of making predictions to a larger and more complicated target set. The original observation that A2 is substantially worse than monolingual training on 'en' and 'fr' should remain. 
\\n\\n> * While the best results in the revised Table 1 show some degradation of performance for \\u201cen\\u201d (21.6 vs. 22.0) with the same multilingual token set, the comparison of SMT and A2 is not fair. SMT is trained with random sampling, and A2s are trained with balanced sampling. A2 should be compared with the balanced sampling version of SMT given in the BS result row.\\n\\nI disagree with the claim that the comparison between SMT and A2 is \\\"not fair\\\" because of their training strategy. While, yes, the balanced sampling exacerbates the differences between high and low resourced languages, there is no requirement or expectation for SMT to follow the A2 sampling strategy. It is a fair comparison to say that SMT is a reasonable multilingual baseline, and A2 should be attempting to surpass its performance, not surpass a weakened version of it. That said, the ablation results in Table 2 are clearly informative as to how the Adapt step helps mitigate the impact of balanced sampling, and lead to the conclusion that A2 is able to provide further improvements to lower resourced languages. However, these improvements do come at a modest cost to higher resourced languages (as demonstrated by the comparison to SMT).\"}",
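For concreteness, the random-vs.-balanced sampling distinction debated in this thread can be sketched as follows; the language list and hour counts below are illustrative placeholders, not the paper's actual data sizes:

```python
import random
from collections import Counter

# Illustrative (not the paper's numbers): hours of training data per language.
hours = {"en": 80, "fr": 40, "es": 20, "ky": 2, "sv": 2}

def sample_languages(balanced: bool, n: int, seed: int = 0) -> Counter:
    """Draw the language of n training examples under the two strategies."""
    rng = random.Random(seed)
    langs = list(hours)
    if balanced:
        weights = [1.0] * len(langs)          # uniform over languages
    else:
        weights = [hours[l] for l in langs]   # proportional to data size
    return Counter(rng.choices(langs, weights=weights, k=n))

random_counts = sample_languages(balanced=False, n=10000)
balanced_counts = sample_languages(balanced=True, n=10000)
# Under size-proportional (random) sampling, "en" dominates the draws;
# under balanced sampling every language is drawn roughly equally often,
# which up-samples the tail languages.
```

Size-proportional sampling lets the high-resource languages dominate the mini-batches, while balanced sampling draws every language roughly equally often; the latter is the up-sampling of tail languages that the BS baseline relies on and that the Adapt step is said to compensate for.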
"{\"title\": \"General Response to All Reviewers\", \"comment\": [\"We have uploaded a new version of our paper to address reviewer concerns. We thank all reviewers for the comments and feedback to improve our paper. Here are the summary of the changes:\", \"Revised the paper considerably in terms of writing, including more precise word usages, fixing grammar errors, fixing typos\", \"Clarified the long-tail problem and added a figure; see Figure 1 and the introduction's second paragraph.\", \"Corrected Eq 6.\", \"Added description of smoothing based on the reviews\\u2019 suggestions.\", \"Moved mBERT picture to the main paper as suggested by many reviewers\", \"Added model parameters in the experimental result tables\", \"Added transcription examples and analysis in Appendix E.\", \"Add more baselines and experiments to make the study more comprehensive.\", \"Added a new baseline as suggested by R3, language conditioning with one-hot language vectors (LID) from Li, et al. [1], see Table 1\", \"Added a new ablation subsection to show the effect of $\\\\tau$ in the imbalance class adjustment in Appendix D\", \"Retrained monolingual models with the same vocabulary as multilingual models for a fair comparison. We also presented the training data sizes in Table 1 to explain the gaps in the CER performance of A2 compared to the best monolingual systems, which are trained on a much larger dataset.\", \"Added a set of new adapters for language groups suggested by R1 by allowing languages within the same language group to share the same language adapters. Detailed experiments results and analysis of language groups are in the ablation study 3.3.2\", \"Showed A2 can avoid the model overfitting to the tail languages under the balanced sampling.\", \"A more advanced pretrained language model XLM-R suggested by R5 is used in place of the distilled-mBERT to study the impacts of pretrained languages; see Table 2.\", \"[1] Kannan, et al. 
Large-scale multilingual speech recognition with a streaming end-to-end model. Interspeech.\"]}",
"{\"title\": \"Continued Responses to Reviewer 1\", \"comment\": \"**Q: What are head and tail languages?**\\n\\nThe head and tail here refer to the amount of training data for multilingual training. The head classes are tokens that have high frequency; otherwise, they are classified as tail classes. We have divided the languages into high, low, and intermediate resources in Table 1; all the resource-rich languages can be viewed as the head languages, and the resource-poor languages can be viewed as the tail languages. \\n\\n**Q: Figure 7 is in the appendix. The main content without the appendix should be as self-contained as possible.**\\n\\nThanks, we have moved Figure 7 to the main text as Figure 3.\\n\\n**Q: A natural adjustment is to scale the raw logits \\u2026 The term logit is misused.**\\n\\nWe define logits as a vector of raw (non-normalized) predictions, and we consistently use it throughout the paper. We removed \\\"raw\\\" from the text to avoid any confusion. Also, we have revised Section 2 to remove the confusion caused by using $y$ as both label and distribution.\\n\\n**Q: Typographical errors**\\n\\nThanks for the suggestion, and we have made the changes for the notation of T, F as well as Equation 6. We also modified the definition of $y$ in KL in Section 2 to remove the confusion. We corrected the definition of $t$ in Subsection 2.3.\\n\\n**Q: equation (9) It is confusing to denote the probability as $y_t^{adj}$. Again, because the bold face y is used as a sequence of labels else where, such as equation (11).**\\n\\nWe have revised the paper to use y exclusively for labels.\\n\\n**Q: Gradient accumulation**\\n\\nWe train our model with a batch size of 32 and accumulate the gradient over two steps to obtain a larger effective batch size on a single NVIDIA V100 16GB GPU, using the Adam optimizer with 25000 warm-up steps.\\n\\n**Q: This is due to the human languages share some common sub-phonetic articulatory features (Wang & Sim, 2014) ...This sentence is ungrammatical. 
2. This is a well-known fact, and the citation cannot be this recent. 3. No evidence in this paper is shown that this is the actual cause of the improvement. Please state it clearly if this is only a speculation.**\\n\\nWe have revised the paper accordingly and cited some previous papers.\\n\\n**Q: SMT models improve the performance of the low-resource languages significantly. This is not exactly true. For example, the performance on Mandarin actually degrades quite significantly.**\\n\\n- As for comparing multilingual and monolingual performance, we would like to clarify that the token sets of the monolingual and multilingual models are not the same in the original paper. The token set for monolingual training is generated only from the specific language and has a much smaller number of target labels. On the other hand, the multilingual training token set is generated from pooled texts from all languages.\\n- To ensure a fair comparison of our multilingual model to the monolingual models, we re-trained all monolingual models with the same token set as the multilingual setting (5K) and revised the monolingual numbers in Table 1 to avoid confusion.\\n\\n**Q: Overfitting check. What's the performance on the training set?**\\n\\nWe performed model decoding on the training data of the tail languages \\\"ky\\\" and \\\"sv\\\".\\nThe BS model is clearly over-fitted to the tail languages due to up-sampling; for example, the CERs on the training data of \\\"ky\\\" and \\\"sv\\\" are significantly lower than on the evaluation data (3.4\\\\% and 4.2\\\\% training vs. 13.4\\\\% and 22.8\\\\% evaluation). Compared with BS, A2 avoids over-fitting to the tail languages: its CERs on \\\"ky\\\" and \\\"sv\\\" are 8.2\\\\% and 23.6\\\\%, much closer to the evaluation CERs.\"}",
"{\"title\": \"Responses to Reviewer 1\", \"comment\": \"Thank you for the insightful and detailed review. We have thoroughly read your review and have updated our paper to address all reviewer comments. And here, we would answer your questions and concerns.\\n\\n**Q: Long-tail problem is not properly defined. Is it a distribution that captures how likely a phone or a word piece is used in all of the world's languages?**\\n- Thank you for pointing this out. We realized that the problem is not properly defined due to the page limit. We added a paragraph in the introduction (second paragraph) to properly define the problem. We also added a figure (Figure 1) to illustrate the problem further.\\n- From the model perspective, the long-tail distribution refers to the skewed subword (word pieces) class distribution of the multilingual data from 11 languages studied in this paper. The skewed distribution is due to two levels of imbalances: the data distribution level and the subword distribution level. First, there are very limited audio samples available on low-resource languages, such as Kyrgyz, Swedish, and Turkish, while the high-resource language data, such as English, French, and Spanish, have vast amounts of data. Second, the distribution of the graphemes or subwords labels follows a long-tailed distribution in ASR since some labels appear significantly more frequent than other labels, even for a monolingual setting. We show in Figure 8 that even with the same amount of training data for each language, the distribution of the subwords of the multilingual token set is still long-tailed. 
Furthermore, a multilingual system may include languages with various writing scripts other than Latin alphabets, such as Chinese or Cyrillic, that eventually maximize the skewness.\\n\\n**Q: The smoothing technique does have an effect on generalizing to low frequency or even unseen tokens, but the paper does not mention the connection or cite the proper papers**\\nThanks for the suggestion. We added the statement in our revised paper as suggested and cited the relevant papers under Equation 2.\\n\\n**Q: I can understand using a larger language model would help the final performance, but how does this solve the long-tail problem and the class imbalanced problem?**\\n- The mBERT model and language adapters are employed to enhance the language and acoustic modeling capability for low-resource languages, respectively. The logit adjustment is to explicitly address the class imbalance problem (the distribution of class labels, i.e., multilingual word pieces) regardless of the amount of language resources by adjusting the class distributions. Logit adjustment is complementary to the other two techniques and can be easily applied to other tasks with long-tailed class distributions. We will make this clear in the revised paper.\\n- In addition, we don't think a larger language model will help the class imbalance problem. The language itself is inherently long-tailed if you consider all the letters or graphemes. For example, in English, the letter \\\"e\\\" appears much more frequently than the letter \\\"q\\\". Therefore, the resulting word pieces will also manifest such imbalance distributions. This can also be seen in the histogram of Figures 5 and 6. Even with equal numbers of language training data, the multilingual word pieces still have a long-tail distribution, which cannot be fixed by the language model alone.\\n\\n**Q: The relationships among languages are ignored?**\\nThanks for the suggestion for the relationships of languages. 
Following your advice, we are currently running more experiments by allowing languages within the same language group to share the same language adapters:\\n\\n- Grouped by language families:\\nRomance languages: it fr es\\nGermanic languages: en nl sv\\nTurkic languages: tr tt ky\\nRussian: ru\\nChinese: zh\\n\\n- Grouped by written scripts:\\nChinese: zh\\nLatin: it fr es en nl sv tr\\nCyrillic: ru tt ky\\n\\nWe will add this study to the revised paper when the experiments are done.\"}",
"{\"title\": \"Responses to Reviewer 4\", \"comment\": \"Thank you for the insightful and detailed review. We have thoroughly read your review and have updated our paper to address all reviewer comments. Here, we answer your questions and concerns.\\n\\n**Q: One way to mitigate this is to pose the problem not as solving universal, multilingual speech recognition, but rather improving performance specifically on tail languages through training on higher resource languages.**\\n- The main motivation of multilingual recognition is to recognize multiple languages with a single model. This not only saves the trouble of creating a separate phone set, language model, and decoder for each language, enabling faster deployment and easier maintenance, but also allows the multilingual training to help the individual languages, especially the low-resource languages. \\n- As for the comparison of multilingual and monolingual performance, we would like to clarify that the token sets of the monolingual and multilingual models are not the same. The token set for monolingual training is generated only from the specific language and has a much smaller number of target labels. On the other hand, the multilingual training token set is generated from pooled texts from all languages. Thus, the complexity of the monolingual model is much lower than that of the multilingual model. \\n- The monolingual token set has 150 tokens per language, whereas, for multilingual training, there are more than 5K tokens in total (see Table 8). For example, for \\u201cen\\u201d, there are 243 tokens, and \\u201cfr\\u201d has 382 tokens. \\n- To ensure a fair comparison of our multilingual model to the monolingual models, we re-trained all monolingual models with the same token set as the multilingual setting (5K) and revised the monolingual numbers in Table 1 to avoid confusion. We found that the gap between monolingual and multilingual models for \\u201cen\\u201d is very small (21.6 vs. 
22.0), and for \\u201cfr\\u201d, A2 improves from 19.8 to 17.7 CER. We reported these numbers in Table 1 of the revised paper. Lastly, the new results demonstrate the advantages of the A2 framework by improving all languages compared to the monolingual models (except for a small degradation of performance for en from 21.6 to 22.0). \\n\\n**Q: On average performance improves, but the improvement to lower resource languages comes at the cost of higher resource languages. Also A2 the proposed system on average does better than standard multilingual training, but only on the 9 lowest resource languages, on English and French A2 actually exacerbates this problem with these higher resource languages showing even larger regressions from monolingual modeling.**\\n- While the best results in the revised Table 1 show some degradation of performance for \\u201cen\\u201d (21.6 vs. 22.0) with the same multilingual token set, the comparison of SMT and A2 is not fair. SMT is trained with random sampling, and A2s are trained with balanced sampling. A2 should be compared with the balanced sampling version of SMT given in the BS result row. \\n- We can clearly see that balanced sampling helps the tail classes and hurts the head classes considerably. With the A2 framework, we can significantly reduce the gap and improve the performance of all languages compared to the multilingual training baseline. Alternatively, one can also compare \\u201cSMT\\u201d and \\u201cSMT+Adjust\\u201d performance in Table 2 to appreciate the advantages of A2 in helping improve all languages.\\n\\n**Q: Typographical error**\\nThank you for pointing this out. After careful examination and checking of our implementation, we revised Equation 6 in the revised paper to $\\\\frac{c_i}{C} - \\\\frac{1}{(N-n_o)\\\\times C}$ if $c_i > 0$, and $\\\\frac{1}{n_o \\\\times C}$ otherwise.\\n\\n**Q: Minor comment: Figure 7 is mentioned in Section 2.3 but is only included in the Appendix. 
It would be clearer to either describe Figure 7 where it is first mentioned, or present this information in Section 2.3 as forward referring to Appendix material.**\\nWe moved the figure to the main paper. Thank you for your suggestion. We appreciate it.\"}",
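The corrected Equation 6 discussed in this thread can be sanity-checked numerically. The sketch below encodes one reading of the notation (c_i the count of class i, C the total count, N the number of classes, n_o the number of zero-count classes), plus a logit-adjustment step in the style the thread discusses; the function names and the tau-scaled log-prior form are our assumptions taken from the logit-adjustment literature, not the paper's exact implementation:

```python
import numpy as np

def smoothed_priors(counts: np.ndarray) -> np.ndarray:
    """Smoothed class priors per the corrected Eq. 6:
    c_i/C - 1/((N - n_o)*C) if c_i > 0, and 1/(n_o*C) otherwise,
    with C = total count, N = number of classes, n_o = zero-count classes."""
    C = counts.sum()
    N = len(counts)
    n_o = int((counts == 0).sum())
    if n_o == 0:                       # no unseen classes: plain empirical priors
        return counts / C
    return np.where(counts > 0,
                    counts / C - 1.0 / ((N - n_o) * C),
                    1.0 / (n_o * C))

def adjust_logits(logits: np.ndarray, priors: np.ndarray, tau: float = 1.0) -> np.ndarray:
    # Logit-adjustment-style correction: shift raw logits by tau * log(prior).
    return logits + tau * np.log(np.maximum(priors, 1e-12))

counts = np.array([5, 3, 2, 0, 0])
pi = smoothed_priors(counts)
# The formula moves a total mass of 1/C from the seen classes to the
# unseen ones, so the smoothed priors still sum to one.
```

For counts [5, 3, 2, 0, 0], each unseen class gets a prior of 1/(2*10) = 0.05 while each seen class loses 1/(3*10), and the resulting vector still sums to one, which is a quick consistency check on the corrected piecewise form.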
"{\"title\": \"Responses to Reviewer 3\", \"comment\": \"Thank you for the insightful and detailed review. We would answer your questions and concerns as follows:\\n\\n**Q: In large scale models such as this, it is important to report the computation requirements of the model in addition to quality improvements, as often the quality grows with model size.**\\n\\nWe have added the parameters count in the Ablation Studies section (Tables 3, 4, and 5).\\n\\n**Q: Besides the ablation studies, there's not much to be learned on how the changes helped the quality.**\\n\\n- For logit adjustment, we showed several tau values for performance comparison in a new subsection 3.3.4. Also, we found that the training or inference phase logit adjustment cannot be applied together, which will break the probability distribution and yield significantly worse performance.\\n- Our initial attempt of mBERT with freezing self-attention (only cross-attentions are updated) yields even worse performance than the baseline random initialized decoder weights (We will put this observation in our revised paper). We have also replaced the mBERT with a more advanced XLM-R pretrained language model to study the impacts of the pretrained language models on the proposed A2 framework.\\n- We have added more experiments with language group adapters, where the languages of the same language group (e.g., \\u201ces\\u201d \\u201cit\\u201d \\u201cfr\\u201d of the Romance language) share the same adapter parameters to study whether a more robust language adapter can be obtained with more training data and whether it can benefit each language of the group. The experiments and discussions are presented in Table 4.\\n\\n**Q: Also there should be more competing baselines to consider, other than the adapter layers of Kannan et al.**\\n\\nWe built the language ID injection (LID) system from Li et al for comparison. We show the results in Table 1. We found that our A2 model outperforms LID (16.0 vs. 
17.1 CER), and if word piece class imbalance adjustment is applied, a significant CER improvement is achieved (16.6 CER).\\n\\n**Q: It's quite unclear what the long tail refers to in this paper. Does it refer to the languages that have little data? Or does it refer to words that are rare or often misclassified? Most of the paper leads me to believe in the former, but figures the appendix leads me to believe in the latter since the histograms are so dense.**\\n\\nFrom the model perspective, the long-tail distribution refers to the skewed sentence piece classes of the multilingual data. The skewed distribution stems from two levels of imbalances: the language training data size (number of training samples) and the sentence piece distribution (number of tokens) level. To better solve the long-tail problem, we need to 1) model the low resource languages robustly 2) address the sentence piece class long-tail distribution properly.\\nWe show in Figure 8 that even with the same amount of training data for each language, the distribution of the subwords of the multilingual token set is still long-tailed. We have made this clear in the introduction section of the revised paper. In addition, we have adjusted the histogram plots to make them less dense.\\n\\n**Q: There's a lack of specific examples that illustrate how the incorporation of the various techniques in this paper show an improvement in the transcription. Showing specific transcriptions would be convincing in terms how showing the wins from these techniques.**\\n\\nWe added examples in the Appendix E in the revised paper.\"}",
"{\"title\": \"Responses to Reviewer 2\", \"comment\": \"Thank you for the constructive review. We have updated our paper to address your comments. We would answer your questions and concerns as follows:\\n\\n**Q: The three techniques alone are not novel enough, and each is proposed by previous works. E.g., initialized with a pre-train language model, class imbalance adjustment, and language-specific adapters, which are similar to a mixture of language experts.**\\n\\n- Thanks for the comment. Initialization with a pre-trained language model is not new in NLP, but to our best knowledge, we are the first who propose this to the speech recognition task, and it\\u2019s non-trivial to get it to work for ASR. Class imbalance adjustment was mainly addressed in the computer vision tasks, which is a classification task. In this paper, we apply the class imbalance adjustment to a sequential classification task, i.e., ASR, with an autoregressive decoder.\\n- Language-specific adapters are not the same as language experts. Adapters are light-weight parameters that are added to encoder and decoder layers to learn language-specific information. They reside in a single model and can be easily replaced with new tasks without changing the other model parameters. On the other hand, to our understanding, a mixture of language experts are basically utilizing multiple encoders or decoders and mix the output of the experts. Language-specific adapters were firstly used by Google in paper [1], here we improve the adapters by adding a common adapter (the Dual-Adapters) to encode the shared knowledge between languages.\\n\\n[1] Kannan, et al. Large-scale multilingual speech recognition with a streaming end-to-end model. Interspeech.\\n\\n**Q: The proposed method can hardly be called as a framework since it has not demonstrated its necessity and applicability for each component. 
In another view, it is more like an assemble of different improvement tricks without much-centralized logic towards a dedicated and focused problem.**\\n\\n- We want to emphasize that the primary problem we are addressing is to train a robust multilingual ASR with a single end-to-end model to improve the recognition of low-resource languages while keeping the recognition performance for the high-resource languages, compared to the monolingual models or the standard multilingual training.\\nThe first challenge we addressed is the long-tail class distribution problem, where we have demonstrated the advantages of applying the logit adjustment alone in table 2. Logit adjustment is a must-have component to address the long-tail label distribution problem.\\n- Another challenge from multilingual ASR is the lack of training data for certain languages. To solve the data scarcity problem, pre-trained mBERT is used for better language modeling, the Dual-adapters are used for better acoustic modeling by adapting the models learned from the pooled training data to the low-resource languages.\\n\\n**Q: The effectiveness of a component (mBERT) need to depend on other components; otherwise, it does not work. This makes the proposed method not generalizable. Why mBERT is only effective when coupled with others?**\\n\\nSince mBERT is trained on the text data only, the text space may not be consistent with the transcriptions of the ASR training data. In order to use it for the ASR decoder, its text space needs to be aligned with the acoustic space in the ASR encoder. We believe the reason why it needs to be coupled with others is that the better acoustic models with dual-adapters or logit adjustment help the alignment between the text space of mBERT with the acoustic space with the encoder via cross-attentions. Nevertheless, the improvement is not as significant as the other two techniques, namely dual adapter and logit adjustments. 
Therefore, mBERT is probably the only component that can be replaced for the sake of model size and computation requirements.\\n\\n**Q: Why not initialize from GPT model or more appropriate from sequence to sequence pre-trained models with a cross-attention module such as MASS or BART?**\\n\\nThanks for the comment, as presented in the previous comment, the improvement of mBERT is not as significant as the other two techniques. Although using larger models like mBART would be more principled, it increases the model size drastically, which will require a much larger GPU memory size and slow down the training significantly. Most importantly, it will affect the decoding speed, making it less practical. In fact, we did try using mBART, unfortunately, the model can barely fit into our GPU memory, and we can only use a batch size of 2. Considering the improvement is small, we did not proceed. Nevertheless, we have trained a new model with a more advanced pretrained language model XLM-R as the decoder to show the effect of a bigger pre-trained language model in the revised paper. The results are presented in Table 2, adding the more advanced pre-trained language model does not provide ASR performance gain in terms of CER.\"}",
"{\"title\": \"Responses to Reviewer 5\", \"comment\": \"Thank you for the comments. We have updated our paper to address your comments. We would answer your questions and concerns as follows:\\n**Q: The framework combines many techniques together and it is hard to tell if any one of those is the 'silver bullet'.**\", \"our_three_technical_contributions_include\": \"1. use of a pre-trained multilingual language model,\\n2. dual adapters, and\\n3. logit adjustments were used to improve the multilingual ASR.\\nOur studies showed that all three approaches complement each other. The first two techniques are employed to enhance the language and acoustic modeling capability for low-resource languages. Comparing these two techniques, we found that dual adapters are more effective than pre-trained language models (BS + mBERT vs. BS + Dual-Adapters). The third technique, the logit adjustment, addresses the class imbalance problem (the distribution of class labels, i.e., multilingual word pieces) regardless of the amount of language resources. Logit adjustment is complementary to the other two techniques and can be easily applied to other tasks with long-tailed class distributions. To sum up, dual adapters and logit adjustments are the two most important techniques for the success of the A2 framework, while the improvement of the mBERT language model comes at the cost of larger models and heavier computations.\\nWe have discussed the effectiveness of each technical contribution in our Ablation Studies section. \\n\\n**Q: Some design/hyperparameter choices are rather magical.**\\nWe use a fixed training hyperparameter set in our speech recognition model to have a fair comparison. We also added a new subsection to describe better how we find the best hyper-parameter $\\\\tau$ for our class imbalance adjustment.\\n\\n**Q: Why did you choose to use distill-mBERT over other alternatives (mBERT, XLM etc.)? 
Would you expect more gain if using a larger model such as XLM-R?**\\nDistilled-mBERT was chosen for its smaller size and decent multilingual language modeling performance to speed up the experiments since we are dealing with 11 languages. Our studies showed that the improvement of distill-mBERT (BS+Dual-Adapters and BS+Dual-Adapters+mBERT in Table 2) is only achieved if better acoustic models are used. Although a pre-trained language model has a better language generation capability, for speech recognition applications, aligning the acoustic and text space is also crucial. Considering the improvement of the distill-mBERT is not as significant as the other two techniques, we think with a larger model like mBERT, XLM-R, the performance gain over the current system with distilled-mBERT will not be significant due to the two considerations: \\nwith increased model parameters, it will be even more demanding in aligning the text and acoustic space, there will be a vast increase in computation (4x for XLM-R base) for training, and the decoding speed will also suffer significantly. \\n\\nWe have trained a model with an XLM-R decoder and compared it with the mBERT results. Our conjectures were verified in Table 2 as XLM-R does not provide ASR performance gain compared to distilled-mBERT.\\n\\n**Q: Negative interference can impact low-resource languages in multilingual models. However, it seems like the opposite is true here: multilingual models can improve even high-resource languages (e.g. IT). Do you have any idea why?**\\nFor speech recognition, different languages share similar articulatory features (sub-phonetic) on how a phoneme is produced, for example, place of articulation, production manner, etc. Multilingual training encourages the model to learn such low-level features, which may be beneficial to all languages, including the resource-rich languages. 
In addition, languages within the same language group, e.g., \\u201cit\\u201d (Italian) and \\u201ces\\u201d (Spanish), share even more similarities in terms of vocabulary, pronunciation, and grammar. The additional data from the same language group can further improve the performance of a language. For \\u201cit\\u201d, 20 hours of data is not good enough for a decent ASR system, and we categorized it as an intermediate language. Our results in Table 1 show that our A2 system can improve all languages compared to the baseline multilingual training with balanced sampling, thanks to the logit adjustment.\"}",
"{\"title\": \"Simple method with comprehensive experiments\", \"review\": \"This paper studies multilingual ASR with a focus on the long tail problem. A new method using dual adapters is proposed. Although there are several ingredients of the method, their effectiveness is verified in detailed ablation studies. Therefore, I believe the results shown in this paper are valuable for future work.\", \"pro\": \"1. The structure of dual adapters is novel.\\n2. To the best of my knowledge, this is the first work to verify the effectiveness of pretrained models in multilingual ASR.\\n3. The paper contains detailed experiments.\", \"con\": \"1. The framework combines many techniques together and it is hard to tell if any one of those is the 'silver bullet'.\\n2. Some design/hyperparameter choices are rather magical.\", \"questions\": \"1. Why did you choose to use distill-mBERT over other alternatives (mBERT, XLM etc.)? Would you expect more gain if using a larger model such as XLM-R?\\n2. Recent work [1] shows negative interference can impact low-resource languages in multilingual models. However, it seems like the opposite is true here: multilingual models can improve even high-resource languages (e.g. IT). Do you have any idea why?\\n\\n\\n[1] On negative interference in multilingual models: findings and a meta-learning treatment. Wang et al., EMNLP 2020.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A mix of tricks for an important problem\", \"review\": [\"This paper aims to improve multilingual speech recognition on common voice, which contains 18 languages, some of which have little data (which the authors here refer to as the long-tail languages I believe). The problem of multilingual ASR is both a practical one as well as a challenging one from the perspective of multitask learning and fairness, and I'm happy to see work in this area.\", \"The paper proposes 3 techniques that together result in a modest improvement over the baseline on common voice. The 3 include logit re-balancing based on class priors, fusion of a BERT-based language model, and the use of a common and language-specific adapter layer in parallel. All of these techniques have been previously explored in slightly different forms for speech problems. They have not been combined in this way before though. To my knowledge, the logit adjustment has not been applied to the long-tail problem in speech recognition.\", \"Pros\", \"Addresses an important problem in ASR\", \"Overall, A2 improves over the baseline of balanced sampling by an average of 1% absolute CER, or a relative improvement of 6%. That is a moderate improvement but worthwhile enough to report.\", \"Introduces class-based logit adjustment to the problem of long tail\", \"Introduces minor tweaks that lead to improvement, and presents ablation study\", \"Cons\", \"In large scale models such as this, it is important to report the computation requirements of the model in addition to the quality improvements, as often the quality grows with model size. There are no comparisons of parameter count here\", \"Besides the ablation studies, there's not much to be learned on how the changes (dual adapter, logit adjustment, or the way mbert is fused) helped the quality. 
It would be nice to report a few failed versions that the authors tried, to learn more about what works and what doesn't.\", \"Overall the changes do not improve significantly over the baseline. Also there should be more competing baselines to consider, other than the adapter layers of Kannan et al. There's the multi-headed decoder approach of Pratap et al. or the language ID injection approach of Li et al. \\\"Multi-Dialect Speech Recognition with a Single Sequence-to-Sequence Model\\\".\", \"It's quite unclear what the long tail refers to in this paper. Does it refer to the languages that have little data? Or does it refer to words that are rare or often misclassified? Most of the paper leads me to believe in the former, but Figures 5 and 6 in the appendix lead me to believe in the latter since the histograms are so dense.\", \"There's a lack of specific examples that illustrate how the incorporation of the various techniques in this paper show an improvement in the transcription. Showing specific transcriptions would be convincing in terms of showing the wins from these techniques...\"], \"other_comments\": \"What is meant by the fourth bullet point in the contributions? Is there a new dataset? I do not understand the contribution \\n\\nThe use of previous tokens as input, i.e. not using teacher forcing, during the later stages of training (Eq. 10) is unconventional. It would be more convincing if the author discussed this a little more, including why it improves quality.\\n\\nIt's unclear how x_{CTC} is defined in fig 1. Is it the output of the encoder?\\n\\nLikewise it's unclear how the function f is defined in fig 1. Is it the same function and weights (assuming a linear transformation from the previous layer) for f(x_CTC) and f(y'_ATTN, h_enc)?\\n\\nFig 7 and comments to it should be moved to the main paper. 
It is essential for understanding of how mbert is integrated into the decoder as that is a big part of the contribution.\\n\\nThe grammar throughout the document is occasionally off which distracts from the content. Needs polish.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reasonable approach but unconvincing\", \"review\": \"This paper addresses multi-lingual speech recognition, where one ASR model is responsible for recognizing speech in multiple languages. In this example the authors look at 11 languages with between 80 and 4 hours of training data. The \\\"long-tail problem\\\" (which isn't clearly stated) that this work is addressing is that the discrepancy in available training data leads to a discrepancy in performance. The paper sets out two goals 1) \\\"to improve the overall performance of multilingual ASR tasks\\\" and 2) (implicitly) to flatten the distribution across languages.\\n\\nA major challenge in multilingual (or multidomain or multitask) modeling like this is that improvements to the tail often come with degradation at the head. This work demonstrates this phenomenon clearly. On the largest languages, English performance degrades from 13.3 to 22.0 and French from 11.5 to 17.7, while on the smallest languages, Kyrgyz improves from 30.0 to 12.1 and Swedish improves from 56.1 to 21.3. While the language average performance improves from 22.3 (monolingual) to 16.0 (proposed multilingual) it is not at all obvious that there is an application setting where this is clearly preferable. One way to mitigate this is to pose the problem not as solving universal, multilingual speech recognition, but rather improving performance specifically on tail languages through training on higher resource languages. If the authors were to focus on improving performance on the 8 languages with 20h or less training data, while including English (en) French (fr) and Spanish (es), but not actually caring whether the high resource languages are improved by multilingual modeling, the results here would be much more compelling. As written the story is somewhat muddled: On average (where average is taken over language, rather than, say expected usage or the system, or population, etc.) 
performance improves, but the improvement to lower resource languages comes at the cost of higher resource languages. Also A2 the proposed system on average does better than standard multilingual training, but only on the 9 lowest resource languages, on English and French A2 actually exacerbates this problem with these higher resource languages showing even larger regressions from monolingual modeling.\\n\\nImplicit in this approach and task is a desire for the distribution of performance across languages to be more consistent. I would recommend making this explicit and providing some measure of variance as well as average across languages. This could be standard deviation (if there is a belief that the performance is normally distributed) or an entropy measure. But it would provide another dimension over which to optimize when understanding tail performance.\\n\\nI believe there is a typo or error in Equation 6. First, there are mismatched subscripts for \\\\pi_y and c_i. I believe this should be \\\\pi_i or c_y. Second consider a distribution with three classes and label counts of c = [1, 0, 0], so C=1, n_0 = 2 and N = 3. Equation 3 would result in \\\\pi = [1/1 - 1/(2*1), 1/1, 1/1] = [1/2, 1, 1] which is not a valid distribution.\", \"minor_comment\": \"Figure 7 is mentioned in Section 2.3 but is only included in the Appendix. It would be clearer to either describe Figure 7 where it is first mentioned, or present this information in Section 2.3 as forward referring to Appendix material.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A large disconnect between the proposed additions and the problems the paper tries to solve\", \"review\": \"The paper proposes three additions to improve a monolithic multilingual end-to-end ASR system. The problem of training a monolithic multilingual ASR system is that using data from multiple languages does not necessary improve over individual monolingual systems. The three additions are a large multilingual language model, the use of language adapters, and smoothing on the token probabilities. Mixing the three additions in a specific way helps improve the average word error rates.\\n\\nThere are two major problems in the paper. One is the imprecise use of words, and the other is the disconnect between the additions and the problems they try to solve. Details are as follows.\\n\\nThe paper contains a lot of imprecise use of words. The term \\\"long tail\\\" is used throughout the paper, but it is never clearly defined. The long tail of a distribution refers to a significant total amount of probability mass spread on a large support. In the context of this paper, when the paper talks about the long-tail problem, what distribution are we talking about? Is it a distribution that captures how likely a phone or a word piece is used in all of the world's languages?\\n\\nWhile the long-tail problem is not properly defined, the class imbalance problem more or less is. There is still a certain amount of ambiguity. For example, what are the classes? Are the classes languages, phones, or word pieces?\\n\\nGiven that the long-tail problem is not defined, it is hard to see why the proposed additions solve the problem. I can understand using a larger language model would help the final performance, but how does this solve the long-tail problem and the class imbalanced problem? The same applies to language adapters. 
The smoothing technique does have a effect on generalizing to low frequency or even unseen tokens, but the paper does not mention the connection or cite the proper papers.\\n\\nThe paper also ignores the relationships among languages. For example, it is obvious that none of the word pieces in Mandarin are shared with the other languages. It is also the only tonal language. As another example, Tatar is Turkic but uses the Cyrillic script; Turkish is also Turkic but it uses the Latin alphabet; Russian is not Turkic but uses the Cyrillic script. These relationships are important in interpreting the results when training multiple languages together.\\n\\nHere are a list of detailed comments.\\n\\n> x \\\\in R^{T,F}\\n\\nT,F is a rather unconventional notation. I would suggest T \\\\times F.\\n\\n> KL(y_{ATTN} || y)\\n\\nAre the y's labels? This is also an unconventional (if not wrong) notation. It should be the the KL of distributions, not labels. Later on, for example in equation (3), y is used as labels.\\n\\n> equation (3)\\n\\n\\\\mathcal{Y} is undefined.\\n\\n> Figure 7 depicts ...\\n\\nFigure 7 is in the appendix. The main content without the appendix should be as self-contained as possible.\\n\\n> Let t denote the current time step.\\n\\nThis is confusing. It's actually not the time in the actual speech, but the t-th token.\\n\\n> A natural adjustment is to scale the raw logits ...\\n\\nThe term logit is misused. Please look it up, stop misusing it, and define the symbols properly.\\n\\n> equation (6)\\n\\nThe symbol * should really be \\\\times.\\n\\n> equation (9)\\n\\nIt is confusing to denote the probability as y_t^{adj}. Again, because the bold face y is used as a sequence of labels else where, such as equation (11).\\n\\n> ... and 2 times gradient accumulation in a single GPU ...\\n\\nWhat does this mean exactly? Please elaborate.\\n\\n> This is due to the human languages share some common sub-phonetic articulatory features (Wang & Sim, 2014) ...\\n\\n1. 
This sentence is ungrammatical. 2. This is a well-known fact, and the citation cannot be this recent. 3. No evidence in this paper is shown that this is the actual cause of the improvement. Please state it clearly if this is only a speculation.\\n\\n> ... even MT models improve the performance of the low-resource languages significantly.\\n\\nThis is not exactly true. For example, the performance on Mandarin actually degrades quite significantly.\\n\\n> ... compared to the MT, the tail classes ... However, the head classes suffer ...\\n\\nAre the terms tail classes and head classes defined?\\n\\n> ... and possibly model overfitting to the tail classes.\\n\\nThis is easy to check. What's the performance on the training set?\\n\\n> The gains ... of the head languages, although tail languages ...\\n\\nAgain, what are head and tail languages?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review comments for paper 3487\", \"review\": \"This paper proposes an Adapt-and-Adjust framework to address the long-tail problem in multilingual ASR, which assembles three techniques: 1) leveraged a pre-trained model mBERT to initialize the decoder, 2) language-specific and language-agnostic adaptors, 3) class imbalance adjustments. Experiments on a multilingual ASR with 11 languages demonstrate the proposed method can achieve accuracy improvements.\\n\\nOverall this paper is clearly written and easy to follow. Each technique is presented with details and evaluated with corresponding ablation studies. It is a good paper in terms of application, experiments and systematic engineering efforts. However, I have several concerns on the overall novelty and technical contributions: \\n1) The three techniques alone are not novel enough, and each is proposed by previous works. E.g., initialized with a pre-train language model, class imbalance adjustment, and language-specific adaptors which are similar to mixture of language experts. \\n2) The proposed method can hardly be called as a framework since it has not demonstrated its necessity and applicability for each component. In another view, it is more like an assemble of different improvement tricks without much centralized logic towards a dedicated and focused problem. \\n3) The effectiveness of a component (mBERT) need to depend on other components, otherwise it does not work. This makes the proposed method not generalizable. Why mBERT is only effective when coupled with others? Is it necessary? Is the improvement by chance but not universal? \\n4) Initializing from mBERT (trained with MLM) but adjusting to autoregressive generation would harm the model capability of mBERT. Why not initialize from GPT model or more appropriate from sequence to sequence pre-trained models with an cross-attention module such as MASS or BART? 
This would be more effective than simply using mBERT.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
awnQ2qTLSwn | Learning to Share in Multi-Agent Reinforcement Learning | [
"Yuxuan Yi",
"Ge Li",
"Yaowei Wang",
"Zongqing Lu"
] | In this paper, we study the problem of networked multi-agent reinforcement learning (MARL), where a number of agents are deployed as a partially connected network. Networked MARL requires all agents to make decisions in a decentralized manner to optimize a global objective with restricted communication between neighbors over the network. We propose a hierarchically decentralized MARL method, \textit{LToS}, which enables agents to learn to dynamically share reward with neighbors so as to encourage agents to cooperate on the global objective. For each agent, the high-level policy learns how to share reward with neighbors to decompose the global objective, while the low-level policy learns to optimize the local objective induced by the high-level policies in the neighborhood. The two policies form a bi-level optimization and learn alternately. We empirically demonstrate that LToS outperforms existing methods in both a social dilemma and two networked MARL scenarios. | [
"agents",
"marl",
"global objective",
"neighbors",
"share",
"reinforcement learning",
"network",
"ltos",
"share reward",
"policy"
] | Reject | https://openreview.net/pdf?id=awnQ2qTLSwn | https://openreview.net/forum?id=awnQ2qTLSwn | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Cxxy7oUT1n",
"f9TS3KA-XLG",
"x7ZgcFBCeKb",
"hp8I06MOTMh",
"C6-usLCnHCO",
"BFC28lltF-e",
"4aEJ6A6IvBw",
"NcVed6uL74f",
"-JHJijUyJE4",
"60wI5bcMhON",
"gvJLIDDlYrL",
"b9O_b_yBHBo",
"2J1Lj5YxCeI"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040437864,
1606034724039,
1606034517624,
1606034456323,
1606034424303,
1606034349674,
1606034213340,
1606034153891,
1604681537175,
1603868254630,
1603772171075,
1603274716340,
1603258408339
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3486/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3486/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3486/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3486/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3486/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3486/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3486/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3486/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3486/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3486/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3486/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3486/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Although there was some initial disagreement on this paper, the majority of reviewers agree that this work is not ready for publication and can be improved in various ways. After the discussion phase, there is also serious concern that the experiments need more statistical work to verify whether they hold up. More comparisons with baselines are required as well. The paper could also be better placed in the context of the state of the art and related work. The paper does contain interesting ideas, and the authors are encouraged to deepen the work and resubmit to another major ML venue.\"}",
"{\"title\": \"To all the reviewers\", \"comment\": \"We thank all the reviewers for the insightful comments. For the main concerns regarding CTDE methods, we have revised our claims in the context of the literature and performed additional experiments with CTDE (i.e., QMIX) in traffic and routing. The results show QMIX does not perform well. Please refer to the revision for details.\"}",
"{\"title\": \"Responses to Reviewer 5\", \"comment\": \"> \\\"it isn't obvious why and how relevant the cited works are... mentioning VDN, QMIX, and QTRAN (which together are some of the latest works in the factorization methods) does not seem to serve any further purpose, as they are no longer compared quantitatively or qualitatively to LToS... there appears to be no evidence whatsoever presented in the latter sections of the paper to show, let alone prove, the superior scalability of LToS.\\\"\\n\\nWe position LToS at a particular line of research on networked MARL, where agents form a graph, have restricted communication (limited to neighboring agents), and cooperate on the objective of maximizing the average of cumulative rewards of all agents, following the setting of Zhang et al. (ICML\\u201918), Qu et al. (NeurIPS\\u201919), Chu et al. (ICLR\\u201920), Qu et al. (NeurIPS\\u201920). Actually, we do not argue that our method is very scalable while these factorization methods are not at all. We have revised the claims to make them precise and clear.\\n\\nNetworked MARL focuses on decentralized learning as well as maximizing the average return of all agents, while factorization methods use centralized training and focus on the case where all agents share a reward. Factorization methods can certainly be applied to networked MARL. But in the literature it is empirically shown that QMIX performs poorly in large-scale networked MARL (Qu et al., 2020a). We also performed additional experiments with QMIX in 1) *traffic*, where QMIX does not perform well, as illustrated in Figure 4 and Table 2, and 2) *routing*, where QMIX does not perform well either, as illustrated in Figure 7 and Table 4.\\n\\n> Furthermore, some of the cited works have been left out at the evaluation stage, which leaves the reviewer puzzled as to which baselines LToS really hopes to outshine. 
The work needs some justification over why the following studies have not been compared to in the evaluation\\\"\\n\\nThe main strength of LToS lies in \\\"its capability to resolve selfishness and assign credits appropriately to bring about a harmonious cooperation in social dilemmas\\\". LToS aims to bring about a harmonious cooperation by reward sharing in networked MARL. Therefore, we compared LToS to the methods for networked MARL, such as ConseNet and NeurComm. Moreover, as the communication is limited to the neighborhood, the methods of communication are not quite related. Additionally, DGN (Jiang et al. ICLR\\u201920) is employed to properly handle the communication within the neighborhood. That also frees us from comparing with communication methods like CommNet (Sukhbaatar et al. NeurIPS 2016), because DGN already showed its advantage over CommNet in experiments when it was proposed.\\n\\nEccles et al. (CoRR 2019) introduced two types of agents: innovator and imitator. There is an intrinsic reward added to the environment reward, so it is still one of the approaches that rely on hand-crafted reward designs, as summarized in Related Work. Moreover, the imitator needs to use the action of the innovator at each timestep to compute the intrinsic reward, which, however, is unrealistic in practice, as they admitted in the paper.\\n\\nFor BAD, all the experiments are performed in two-player cooperative games. It is non-trivial to extend it to more than two players. Besides, its hierarchical mechanism requires global information, which is not realistic in our scenario.\\n\\nHostallero et al. (AAMAS 2020) aim at maximizing social welfare, too. But unlike our work, they simply use temporal difference error for reward shaping instead of real reward sharing, and there is no explicit optimization for social welfare.\\n\\nYang et al. 
(ICML 2018) use a mean-field method and neighbors' information to guarantee scalability and convergence to Nash equilibrium, but their goal does not include optimizing the global return. Besides, DGN already showed its advantage over their MFQ in experiments when it was proposed.\\n\\n> Synchronization is definitely not cost-free; all the more so if the synchronized RNG is used to sample an experience from the agents' replay buffers. How do the agents synchronize their RNG in a decentralized manner?\\n\\nAs agents cooperate on maximizing social welfare (not a competitive setting), they can simply use a pre-defined RNG. Or, do we misunderstand your question?\\n\\n> In the Routing evaluation. has overhead been taken into account? How does LToS fare with respect to varied communications channel? What if the network were sparser? Do you observe any trends as you vary the extent of network connectivity?\\\"\\n\\nRouting is a very complex problem and we only test LToS in a simplified scenario. We did not investigate varied communication channels of the backbone network or network connectivity. A more thorough investigation of routing will be considered in future work.\"}",
"{\"title\": \"Responses to Reviewer 3\", \"comment\": \"> At the end of Introduction, the sentence \\u2018LToS is easy to implement and currently realized by DDPG\\u2026\\u2019 can be misleading because of the word \\u2018realized\\u2019 and the fact that authors argue that LToS is a newly proposed method. Does this mean LToS simply combines DDPG and DGN? Do Figure 5 and 6 represent selfishness of agents when LToS is used?\\n\\nWe consider our LToS more of a new hierarchical MARL framework than a new method, so it is not restricted to DDPG+DGN but can be realized by diverse combinations of methods. It is just \\\"currently realized\\\" by DDPG+DGN in the experiments. Yes, Figures 5 and 6 represent the temporal and spatial patterns of agents' selfishness under LToS.\"}",
"{\"title\": \"Responses to Reviewer 2\", \"comment\": \"> Why is this single-hop sharing effective in the experiments? Is it because of domain-specific reasons, or it is because that single-hop sharing is in principle equally effective, why?\\n\\nThere is an implicit assumption in networked MARL that each agent can only perform single-hop communication in one timestep. That is, two-hop communication requires two timesteps, which could make communicated information outdated. But how to deal with this is not the focus of this paper.\\n\\n> The derivation of (18) using taylor expansion is unclear to me. Could the authors explain it with more details?\\n\\nIt is not suitable to be explained as a Taylor expansion. It is just an alternating update, as in DARTS (Liu et al. ICLR'19), to solve the bi-level optimization problem. We have corrected this in the revision.\\n\\n> I don\\u2019t fully understand the proof of Proposition 4.2. Specifically, does \\u201cphi can be learned in a decentralized manner\\u201d mean that the optimal phi can be based on only the local observation for each agent, instead of based on global state? Could the authors comment on the approximation error induced by the mean-field approximation? Why the proof begins with $\\\\phi_i$ based on $o_i$ and ends with $\\\\phi_i$ based on global state s.\\n\\nSorry for the confusion. We have revised the paper to address this. We have deferred the approximation of state by observations (or history) to the end, which makes the mathematical part simple and clear. 
About the approximation error, we are afraid we cannot give a generic quantitative error analysis at this stage, because it is extremely hard to model and analyze the error introduced by the reduction of action dependency.\\n\\nWe notice that there are some works that study the theoretical foundations and error analysis of MARL, but they usually rely on strong and special assumptions, like the exponential decay property and independent state (local observation) transitions (Qu et al. 2019, arXiv, 1912.02906), which do not hold in many real applications and thus limit their generality (Qu et al. NeurIPS'20). \\n\\n> In Equation (17) and (20), should phi^* be just phi (i.e. no * here)?\\n\\nTypos, $\\\\phi^*$ is supposed to be $\\\\phi$ here. We have corrected this. \\n\\n> The low-level policy is to optimize the shared rewards. My understanding is that any (single-agent) RL algorithm can be used for optimizing the shared rewards, e.g. DQN, DDPG, A2C, etc. Why would the authors choose DGN, a rather less popular RL algorithm? Have the authors tried more popular algorithms as the low-level policy?\\\"\\n\\nThe understanding is correct. We consider LToS more of a new hierarchical MARL framework than a new method, so it is not restricted to DDPG+DGN but can be realized by diverse combinations of methods. We currently employ DGN in our experiments because it is capable of handling communication while others (DQN, DDPG, A2C) are not, and it has shown its advantage over others like CommNet (Sukhbaatar et al. NeurIPS 2016).\\n\\n> For fixed LToS, how do we determine the fixed sharing weights?\\n\\nFor *prisoner*, we specifically set the selfishness to correspond to the average global return (i.e., 0.5). For *traffic* and *routing*, we used the best fixed selfishness found by grid search. We have made this clear in the revision.\"}",
"{\"title\": \"Responses to Reviewer 4\", \"comment\": \"> The contribution of the paper is mainly in formulating the problem in the actor-critic setup of DDPG method which leads to a limited novelty.\\n\\nOur main contribution is how to learn reward sharing so as to maximize the average return of all agents in networked MARL. LToS is a new hierarchical MARL framework rather than an actor-critic setup, so it is not restricted to current instantiation of DDPG+DGN but can be realized by diverse combinations of methods.\\n\\n> A key concern about the paper is how to decompose the reward in the first place. The paper aims at optimizing a global objective and assumes (also in the propositions) that this objective has additive connection with the decentralized rewards. Nevertheless, this is a strong assumption, particularly in real-world applications. A global reward can be decomposed into summation of smaller rewards, but not necessarily the other way around. As long as there is a global objective, we need a way to distribute the reward among the agents via learning or reward reshaping (or even manually). How can we properly define the reward of each agent in such scenarios?\\n\\nIn networked MARL, we do not decompose the reward, but each agent naturally has an individual reward, following the setting of Zhang et al. (ICML\\u201918), Qu et al. (NeurIPS\\u201919), Chu et al. (ICLR\\u201920), Qu et al. (NeurIPS\\u201920). For example, in traffic, the reward of each agent is the negative of the queue lengths. In networked MARL, the main problem is just how to advocate cooperating on the objective of maximizing the average of cumulative rewards of all agents, which serves as the global objective.\\n\\n> It is also unclear what is the benefit of sharing only with the neighbors. The method learns a weight vector of size |N_i| for every agent. Does it make a difference in the architecture/algorithm if we learn the weights of all the other agents (size |N|) instead? 
\\n\\nIn networked MARL, a common assumption is that the reward of each agent depends only on its action and the actions of its neighbors (Qu et al., 2020a). Thus, LToS only learns to share reward with neighbors, which is also limited by communication. For the case of |N|, each agent needs to output the weight for all other agents and take as input the weights from all other agents, and thus it becomes more centralized, which contradicts the decentralized learning of networked MARL.\\n\\n> Formulating the weights as finite discrete values looks unnatural. If the method is designed for continuous action space, it is expected to have the notations to be continuous as well. Can we just simply convert the summations into integration in the propositions!?\\n\\nYes, we use summations only to ease the presentation.\\n\\n> The authors claim that the problem with the related work is that they cannot scale up with the number of agents. However, there is no (empirical) support that how the proposed approach deals with large-scale problems.\\n\\nActually, we do not argue that our method is very scalable while these factorization methods are not at all. We have amended the claims. Networked MARL focuses on decentralized learning as well as maximizing the average return of all agents, while factorization methods use centralized training and focus on the case where all agents share a reward. Factorization methods can certainly be applied to networked MARL. But in the literature it is empirically shown that QMIX performs poorly in large-scale networked MARL (Qu et al., 2020a). We additionally performed experiments with QMIX on 1) *traffic*, where QMIX does not perform well as illustrated in Figure 4 and Table 2, and 2) *routing*, where QMIX does not perform well either as illustrated in Figure 7 and Table 4.\\n\\n> In general, the experiments are small and based on simulation, and simulated scenarios are not considered real-world (which is claimed otherwise in the paper). 
I would recommend to incorporate more supportive empirical evaluation.\\n\\nWe have removed these claims in the revision.\\n\\n> Minor: What is $\\\\phi_{-i}$ in eq 17\\n\\nAs stated in Equation (2), it denotes the joint policy of all agents except agent $i$.\"}",
"{\"title\": \"Responses to Reviewer 1: part 1\", \"comment\": \"> \\\"Obviously, CTDE cannot address such problems due to the curse of dimensionality.\\\" CTDE means that there is the option to use centralized information at training time. Clearly, some ways of using centralized information will scale better than others and claiming that none of them scale is simply unfounded.\\n\\n> \\\"However, they are learned in a centralized way and hence not scalable.\\\" These methods have been scaled to large numbers of agents in complex environments. Please provide citations when making a claim that something doesn't scale. For example, the \\\"The StarCraft Multi-Agent Challenge\\\", Samvelyan et al 2020, includes results for numbers of agents comparable to the largest experiments in this paper.\\n\\nWe have revised these claims to make them accurate. We position LToS at a particular line of research of networked MARL, where CTDE methods may not easily scale up with the number of agents as empirically demonstrated in Qu et al., 2020a, including MADDPG, QMIX. We also additionally performed the experiment of QMIX in 1) *traffic*, and it shows QMIX does not perform well as illustrated in Figure 4 and Table 2, and 2) *routing*, and it shows QMIX does not perform well either as illustrated in Figure 7 and Table 4.\\n\\n> \\\"One is that the reward function... tragedy of the commons.\\\" I am struggling to make sense of this paragraph. Please work on the clarity of the writing.\\n\\n> \\\"The simple way to optimize the global objective is that each agent maximizes its own expected return, which is known as Markov game. \\\" This is wrong. 
When each agent optimizes their own expected return this is typically not a means of optimizing the global objective.\\n\\nWe have revised the Introduction and Background to make them precise and clear.\\n\\n> \\\"Moreover, the observation of each agent $o_i \\\\in O_i$ can be enhanced to the Cartesian product of agent i and its neighbors (Jiang et al., 2020) or the observation history (Chu et al., 2020)\\\". I don't follow this. If the observation of each agent includes the observation of all neighbors (which includes the observation of their neighbors), then shouldn't everyone observe everything?\\\"\\n\\nThe observations of neighbors are obtained via communication, not observed directly, and $o_i$ denotes the information available after communication. We have revised this part to make it clear. \\n\\n> \\\"In networked MARL, as the reward of an agent is assumed to depend on the actions of neighbors, we allow reward sharing between neighboring agents\\\": The reward function also depends on the global state, 's', which is a function of the joint action of all of the agents. So this local reward sharing seems clearly insufficient in general.\\n\\nAlthough the joint action of all agents determines the state transition, and hence the state distribution, only the actions in the neighborhood of an agent determine its reward at a particular state. Therefore, it is easy for an agent to learn to improve the return by sharing reward with neighbors, since the change of actions of neighbors directly affects the reward. However, the agents outside the neighborhood can only affect the return of the agent through the change of state distribution. It is hard for an agent to learn to account for the effect of reward sharing on the state distribution. Therefore, we consider only local reward sharing. \\n\\n> Equation (1) is wrong. The left-hand side conditions on 'o_i', but the right-hand side conditions on 's'. 
This also affects all following equations.\\n\\n> Theory: -4.3: \\\"Each vertex has its own local policy \\u03c6ij (wij |oi), and we can verify their independence by means of Markov Random Field.\\\" This is not clear to me. Furthermore, given that the transition function conditions on the joint action and that the reward function depends on the central state, this seems wrong. Unless I am mistaken, the dependency on the central state should break any locality assumptions.\\n\\nWe have revised the paper to clearly present the mathematical workflow behind LToS. Specifically, in (1), $o_i$ should be $s$, and in Proposition 2, $\\\\phi_{ij} (w_{ij} |o_i)$ should be $\\\\phi_{ij} (w_{ij} |s)$. The main difference is that we deferred the approximation of state by observations (or history) to the end, which makes the mathematical part simple and clear.\\n\\n> Eqn 6 to 15: This proof seems unnecessarily cumbersome. W only redistributes the rewards, so the sum of total rewards is unchanged, qed.\\n\\nYes. We just wanted to make it more rigorous.\"}",
"{\"title\": \"Responses to Reviewer 1: part 2\", \"comment\": \"> \\\"Unlike existing hierarchical RL methods, we can directly construct the value function and action value function of $\\\\boldsymbol{\\\\phi}$ based on the value function of $\\\\boldsymbol{\\\\phi}$ at each agent.\\\": Constructing the value function isn't really the problem, but approximating and learning it is challenging.\\\"\\n\\nYes, it is no problem to construct a new value function. What we want to avoid is learning the new value function. By this property, we are able to reuse the value function of the low-level policy as we do in the current implementation of LToS. \\n\\n> The results on the prisoner's dilemma are misleading. Clearly, if there is an ability to change the reward functions of individual agents (which is assumed by LToS), there is no more social dilemma. As such, only baselines that maximize the total reward are credible comparisons (and seem to be missing completely).\\n\\nYes, for credible comparisons, we exactly use fixed LToS and NeurComm as baselines that maximize the total return. As can be seen in Appendix, we specially set their selfishness and $\\\\alpha$ to direct agents to maximize average return. As a result, they are able to cooperate eventually but converge much slower.\\n\\n> None of the results include uncertainty estimates. It is furthermore unclear how many seeds were used. Furthermore, the fixed LToS baseline (\\\"For ablation, we keep the sharing weights fixed for each agent, named fixed LToS\\\") seems odds. Did you try a baseline where all agents simply share their reward equally with their neighbors? Also, centralized baselines are missing. E.g: https://arxiv.org/pdf/1910.00091.pdf\\n\\nFirst, we do not include uncertainty estimates since the test does not contain any random factor. 
That is because we generated the vehicle and data packet flows once and kept them throughout the whole experiment, partly because vehicle routes have to be fixed in CityFlow. We use three random seeds and show in the tables the average results over them. For fixed LToS, we fix the selfishness and all neighbors have the same sharing weight. For example, if we fix the selfishness of each agent at 0.2 and each agent has four neighbors, then each neighbor gets 0.2. As we performed a grid search to find the best selfishness for fixed LToS in traffic and routing, the experiment should cover the case \\\"all agents simply share their reward equally with their neighbors.\\\" For centralized baselines, we additionally compared with QMIX; as shown in the revision, QMIX does not perform well in *traffic*.\\n\\n> in \\\"ROUTING\\\" Fixed LToS (ie. not learning to share) and LToS seem indistinguishable.\\n\\nWe believe the figure does show some difference. We admit that for *routing*, the gap between LToS and other baselines is smaller compared to *prisoner* and *traffic*.\"}",
"{\"title\": \"On Positioning and Evaluation\", \"review\": \"The paper proposes a hierarchical multi-agent reinforcement learning method for the restricted communication setting and verifies the algorithm performance in a number of useful applications. The hierarchical approach to the networked MARL problem proves novel, effective, and interesting.\\n\\n+ Strengths:\\n\\n+ The work targets an arguably less explored area by focusing on the restrictions on inter-agent communication that may be present in realistic scenarios.\\n\\n+ Evaluation setup is varied, explained in detail and visualized in an intuitive manner.\\n\\n+ Niche is well-identified, and the contribution is clear.\\n\\n\\n- Major Concerns:\\n\\n- The reviewer had issues positioning the paper among the different lines of research. Although the research gap itself is clear (scalable MARL methods in a restricted communication setting), it isn't obvious why and how relevant the cited works are. For example, mentioning VDN, QMIX, and QTRAN (which together are some of the latest works in the factorization methods) does not seem to serve any further purpose, as they are no longer compared quantitatively or qualitatively to LToS. The authors' claim that they are not scalable leads the reviewer to anticipate that LToS naturally is scalable, but there appears to be no evidence whatsoever presented in the latter sections of the paper to show, let alone prove, the superior scalability of LToS, with, for example, growing numbers of agents and training times.\\n\\n- Furthermore, some of the cited works have been left out at the evaluation stage, which leaves the reviewer puzzled as to which baselines LToS really hopes to outshine. 
The work needs some justification of why the following studies have not been compared to in the evaluation:\\n\\nIf the main strength of LToS lies in its capability to function effectively and efficiently in a restricted communications setting, comparison to one or more of the following works should be of great advantage in illustrating that edge:\\nDIAL/RIAL by Foerster 2016 - Learning to Communicate with Deep Multi-Agent Reinforcement Learning\\nBiCNet by Peng 2017 arXiv - Multiagent Bidirectionally-Coordinated Nets\\nCommNet by Sukhbaatar 2016 NeurIPS - Learning Multiagent Communication with Backpropagation\\nIC3Net by Singh 2019 ICLR - Learning When to Communicate at Scale in Multiagent Cooperative and Competitive Tasks\\nSchedNet by Kim 2019 ICLR - Learning to Schedule Communication in Multi-agent Reinforcement Learning\\n\\nIf the main strength of LToS lies in its capability to resolve selfishness and assign credits appropriately to bring about harmonious cooperation in social dilemmas, analysis with respect to this work should be helpful:\\nEccles 2019 CoRR - Learning Reciprocity in Complex Sequential Social Dilemmas\\n\\nIt would be interesting to draw some parallels between LToS and BAD, as both draw inspiration from a hierarchical decomposition:\\nBAD by Foerster 2019 ICML - Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning\\n\\nThis recent AAMAS paper is based on peer evaluation and exchanging evaluation messages computed from recently obtained rewards:\\nPED-DQN by Hostallero 2020 AAMAS - Inducing Cooperation through Reward Reshaping based on Peer Evaluations in Deep Multi-Agent Reinforcement Learning\\n\\nSome of the potential issues to discuss are: bandwidth usage of message exchange, message overhead in sharing the neighbors' rewards.\\n\\nUsing neighbors' information to achieve scalability in MARL most likely requires discussion of mean-field methods, such as:\\nYang 2018 ICML - Mean Field Multi-Agent Reinforcement
Learning.\\n\\n- Going through the Appendices spurred a great deal of curiosity, as the authors mention that all agents share the same, synchronized random number generator with the same seed across all the agents. This leads me to believe that the philosophy of decentralized learning is lost in LToS. Synchronization is definitely not cost-free; all the more so if the synchronized RNG is used to sample an experience from the agents' replay buffers. How do the agents synchronize their RNG in a decentralized manner?\\n\\n- In the Routing evaluation, has overhead been taken into account? How does LToS fare with respect to varied communication channels? What if the network were sparser? Do you observe any trends as you vary the extent of network connectivity?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Learning to Share in Multi-Agent Reinforcement Learning\", \"review\": [\"The paper present a new method, called LToS which enables agents to share rewards in MARL. Two levels of policies, high-level and low-level, determines rewards and optimize global objectives. Three diverse scenarios were used to test the performance of LToS compared to other baseline methods. LToS consistently outperforms other methods. In the second scenario, authors also show the need for high-level policy by introduction fixed LToS.\", \"At the end of Introduction, the sentence \\u2018LToS is easy to implement and currently realized by DDPG\\u2026\\u2019 can be misleading because of the word \\u2018realized\\u2019 and the fact that authors argue that LToS is a newly proposed method. Does this mean LToS simply combines DDPG and DGN?\", \"Do Figure 5 and 6 represent selfishness of agents when LToS is used?\", \"Minor editorial errors in Appendix\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting paper on learning to share rewards\", \"review\": \"Summary\\nThe paper considers the cooperative MARL setting where agents get local rewards and they are interconnected as a graph where neighbors can communicate. The paper specifically considers the communication of reward sharing, that is, an agent shares (part of) its reward to its neighbors, such that each agent optimizes its local reward plus rewards from its neighbors. This motivates a bi-level optimization framework where the high-level policy decides how the rewards are shared and the low-level policy locally optimizes the shared rewards given the high-level\\u2019s decision. The paper\\u2019s flow motivates such a framework well. The experimental results demonstrate the method\\u2019s effectiveness. I think it is a strong paper (accept), but my confidence is low due to the following confusions I have.\\n \\nComments/Questions\\n \\n1. I have a high-level comment on the reward sharing mechanism. It seems that the proposed method does not support multi-hop sharing because rewards can only be shared to neighbors. Why is this single-hop sharing effective in the experiments? Is it because of domain-specific reasons, or it\\u2019s because that single-hop sharing is in principle equally effective, why?\\n\\n2. The derivation of (18) using taylor expansion is unclear to me. Could the authors explain it with more details?\\n\\n3. I don\\u2019t fully understand the proof of Proposition 4.2. Specifically, does \\u201cphi can be learned in a decentralized manner\\u201d mean that the *optimal* phi can be based on only the local observation for each agent, instead of based on global state? Could the authors comment on the approximation error induced by the mean-field approximation? Why the proof begins with phi_i based on o_i and ends with phi_i based on global state s.\\n\\n4. In Equation (17) and (20), should phi^* be just phi (i.e. no * here)?\\n\\n5. 
The low-level policy is to optimize the shared rewards. My understanding is that any (single-agent) RL algorithm can be used for optimizing the shared rewards, e.g. DQN, DDPG, A2C, etc. Why would the authors choose DGN, a rather less popular RL algorithm? Have the authors tried more popular algorithms as the low-level policy?\\n\\n6. For fixed LToS, how do we determine the fixed sharing weights?\\n\\n---\\nThanks for the response. I've increased my confidence.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Needs some improvement\", \"review\": \"The paper addresses multi-agent RL problems by presenting a decentralized approach where the agents learn to share their reward with their neighbors. In this method, a high-level policy determines a weight vector for weighting the reward of neighboring agents, and then each agent learns their own independent policy. The learning is thus conducted locally in a partially connected network toward a common goal and without the knowledge of global state and actions.\\n\\nOverall, the approach is intuitive and interesting for decentralized learning in MARL tasks. However, I have some comments/questions for improving the paper that are summarized below. Hence, I vote to reject at this stage.\", \"pros\": [\"Intuitive design of communication among agents in decentralized setting\", \"Clever adaption of algorithms\", \"Well written paper and properly organized\"], \"comments\": [\"The contribution of the paper is mainly in formulating the problem in the actor-critic setup of DDPG method which leads to a limited novelty.\", \"A key concern about the paper is how to decompose the reward in the first place. The paper aims at optimizing a global objective and assumes (also in the propositions) that this objective has additive connection with the decentralized rewards. Nevertheless, this is a strong assumption, particularly in real-world applications. A global reward can be decomposed into summation of smaller rewards, but not necessarily the other way around. As long as there is a global objective, we need a way to distribute the reward among the agents via learning or reward reshaping (or even manually). How can we properly define the reward of each agent in such scenarios?\", \"It is also unclear what is the benefit of sharing only with the neighbors. The method learns a weight vector of size |N_i| for every agent. 
Does it make a difference in the architecture/algorithm if we learn the weights of all the other agents (size |N|) instead?\", \"Formulating the weights as finite discrete values looks unnatural. If the method is designed for a continuous action space, the notation is expected to be continuous as well. Can we just simply convert the summations into integrals in the propositions!?\", \"The authors claim that the problem with the related work is that they cannot scale up with the number of agents. However, there is no (empirical) support for how the proposed approach deals with large-scale problems.\", \"In general, the experiments are small and based on simulation, and simulated scenarios are not considered real-world (which is claimed otherwise in the paper). I would recommend incorporating more supportive empirical evaluation.\"], \"minor\": \"What is \\\\phi_{-i} in eq 17\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Some interesting ideas, but issues with formalizing the problem setting, theory and unconvincing results.\", \"review\": [\"Update: I appreciate the detailed replies to my questions. Indeed, some of the points I raised were addressed well and the paper updated accordingly.\", \"However, some new concerns were also raised by the replies:\", \"Using 3 seeds for the experimental evaluation is an extremely questionable evaluation protocol. There is no way to know if any of the results are going to hold up.\", \"It's also clear now that none of the experiments are comparing to benchmark numbers from other publications. It would have been more confidence inspiring if the method was tested on a set of tasks where external benchmarks have already been established.\", \"This is particularly true for the new results that were added to the paper, e.g. the QMIX results. It's difficult to make sense of them and the instability points towards a potential hyperparameter issue.\", \"All baselines for the 'prisoners' case should at least compare to the fully cooperative case of adding up the rewards. Comparing to a DQN baseline that maximizes individual rewards is a red herring.\", \"It's odd that all experiments require less than 1000 episodes to train. This is very unusual for challenging multi-agent RL problems. It would be great to understand if the main selling point of LToS is sample-complexity/learning speed or if there is something else going on.\", \"I also agree with the concern raised by other reviewers that the paper is currently not positioned clearly.\", \"All things considered, I believe my score is still appropriate for the paper. However, I also believe that a future version of the paper with clarified positioning and more thorough experimental evaluation could make for a compelling contribution.\"], \"original_review\": \"==========\\n-\\\"Obviously, CTDE cannot address such problems due to the curse of dimensionality.\\\". 
CTDE means that there is the *option* to use centralized information at training time. Clearly, some ways of using centralized information will scale better than others and claiming that none of them scale is simply unfounded. \\n\\n-\\\"One is that the reward function.. tragedy of the commons.\\\". I am struggling to make sense of this paragraph. Please work on the clarity of the writing. \\n\\n-\\\"However, they are learned in a centralized way and hence not scalable.\\\" These methods have been scaled to large numbers of agents in complex environments. Please provide citations when making a claim that something doesn't scale. For example, the \\\"The StarCraft Multi-Agent Challenge\\\", Samvelyan et al 2020, includes results for numbers of agents comparable to the largest experiments in this paper. \\n\\n-\\\"Moreover, the observation of each agent o_i \\u2208 O_i can be enhanced to the Cartesian product of agent i and its neighbors (Jiang et al., 2020) or the observation history (Chu et al., 2020)\\\". I don't follow this. If the observation of each agent includes the observation of all neighbors (which includes the observation of their neighbors), then shouldn't everyone observe everything? \\n\\n-Equation (1) is wrong. The left-hand side conditions on 'o_i', but the right-hand side conditions on 's'. This also affects all following equations. \\n\\n-\\\"The simple way to optimize the global objective is that each agent maximizes its own expected return, which is known as Markov game. \\\". This is wrong. When each agent optimizes their own expected return this is typically not a means of optimizing the global objective. \\n\\n-\\\"In networked MARL, as the reward of an agent is assumed to depend on the actions of neighbors, we allow reward sharing between neighboring agents\\\": The reward function also depends on the global state, 's', which is a function of the joint action of all of the agents. 
So this local reward sharing seems clearly insufficient in general. \\n\\n- Eqn 6 to 15: This proof seems unnecessarily cumbersome. W only redistributes the rewards, so the sum of total rewards is unchanged, qed.\\n\\n-\\\"Unlike existing hierarchical RL methods, we can directly construct the value function and action value function of \\u03c6 based on the value function of \\u03c0 at each agent.\\\": Constructing the value function isn't really the problem, but approximating and learning it is challenging.\", \"theory\": \"-4.3: \\\"Each vertex has its own local policy \\u03c6ij (wij |oi), and we can verify their independence by means of Markov Random Field.\\\" This is not clear to me. Furthermore, given that the transition function conditions on the joint action and that the reward function depends on the central state, this seems wrong. Unless I am mistaken, the dependency on the central state should break any locality assumptions.\", \"experiments\": [\"The results on the prisoner's dilemma are misleading. Clearly, if there is an ability to change the reward functions of individual agents (which is assumed by LToS), there is no more social dilemma. As such, only baselines that maximize the total reward are credible comparisons (and seem to be missing completely).\", \"The \\\"traffic\\\" and \\\"ROUTING\\\" experiments seem more interesting. A few caveats: None of the results include uncertainty estimates. It is furthermore unclear how many seeds were used. Furthermore, the fixed LToS baseline (\\\"For ablation, we keep the sharing weights fixed for each agent, named fixed LToS\\\") seems odd. Did you try a baseline where all agents simply share their reward equally with their neighbors? Also, centralized baselines are missing. E.g: https://arxiv.org/pdf/1910.00091.pdf.\", \"In \\\"ROUTING\\\" Fixed LToS (i.e. 
not learning to share) and LToS seem indistinguishable.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
hypDstHla7 | Neuron Activation Analysis for Multi-Joint Robot Reinforcement Learning | [
"Benedikt Feldotto",
"Heiko Lengenfelder",
"Alois Knoll"
] | Recent experiments indicate that pre-training of end-to-end Reinforcement Learning neural networks on general tasks can speed up the training process for specific robotic applications. However, it remains open whether these networks form general feature extractors and a hierarchical organization that are reused, as is apparent, e.g., in Convolutional Neural Networks. In this paper we analyze the intrinsic neuron activation in networks trained for target reaching of robot manipulators with increasing joint number in a vertical plane. We analyze the individual neuron activity distribution in the network, introduce a pruning algorithm to reduce network size while keeping performance, and with these dense network representations we spot correlations of neuron activity patterns among networks trained for robot manipulators with different joint numbers. We show that the input and output network layers have more distinct neuron activation in contrast to inner layers. Our pruning algorithm reduces the network size significantly and increases the distance of neuron activations while keeping a high performance in training and evaluation. Our results demonstrate that neuron activity can be mapped among networks trained for robots of different complexity. Here, robots with a small difference in joint number show higher layer-wise projection accuracy, whereas more dissimilar robots mostly show projections to the first layer. | [
"Reinforcement Learning",
"Machine Learning",
"Robot Motion Learning",
"DQN",
"Robot Manipulator",
"Target Reaching",
"Network Pruning"
] | Reject | https://openreview.net/pdf?id=hypDstHla7 | https://openreview.net/forum?id=hypDstHla7 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"5H730Hl_UOt",
"SQ9AiSN-LuA",
"yxS0PGeDbRI",
"qpRRH9UvWqu"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040355907,
1603826390508,
1603807073688,
1603710769629
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3484/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3484/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3484/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper analyzes neuron activations for neural networks trained via RL to perform reaching with planar robot arms. This analysis includes an evaluation of the correlation between neurons of different models trained to control arms with different degrees-of-freedom. In performing these evaluations, the paper proposes a heuristic pruning algorithm that reduces the size of the network and increases information density. Correlation is assessed based on a projection of the source network on the target network.\\n\\nThe paper is well written and considers a challenging problem of interest to the community. The proposed pruning strategy as a means of maximizing information content is reasonable and seems to perform well. However, the significance of the contributions is limited by the experimental evaluation. The experiments consider a large number of models, however the scope of problems on which the method is evaluated is narrow, making it difficult to draw conclusions about the merits and significance of the work. The authors are encouraged to extend the analysis to a more diverse set of problems.\"}",
"{\"title\": \"Thorough analysis in a limited scope\", \"review\": [\"#### Summary\", \"The authors present a method for analysing neuron activity in neural networks trained via RL on a multi-joint planar reaching task, as well as correlating neurons between different models trained on tasks with potentially a different number of joints. The method consists of three steps:\", \"1. Compare different neurons within a model using normalised activation traces over a number of episodes, and cluster hierarchically based on similar activity.\", \"2. Use said clusters to prune the networks based on merging neurons within a cluster with an intra-cluster distance below some threshold, alternated with retraining.\", \"3. Compare different models by optimising a linear projection between neurons, and evaluate reconstruction error, coverage and saturation.\", \"Results indicate the proposed pruning method is effective in reducing the number of neurons without affecting accuracy, as well as showing correlations between corresponding layers of different models, though these reduce with larger difference in number of joints.\", \"#### Pros\", \"The authors perform a sufficiently thorough evaluation, with a large number of models compared and reasonable ablations, baselines and metrics.\", \"While descriptions are brief, the method is generally well described and mathematical notation consistent.\", \"The proposed heuristic pruning approach seems to perform well in this case, as evidenced by all model sizes converging to the same size in Fig. 3.\", \"The approach of first pruning networks to maximise information content in the activations before correlating different models makes a lot of sense.\", \"#### Cons\", \"The authors frame their work within the context of feature reuse and explainability, however the presented work is limited to showing correlations between features learned on identical or very similar tasks. 
It is unclear how this enables either reuse or explainability and perhaps not surprising that these correlations for very similar or identical tasks exist per se; more interesting would be to see how these can be exploited. These correlations also degrade very rapidly with an increasing number of joints. I hypothesise that one would perhaps see stronger correlations between more different tasks if not the morphology was changed but rather the objective / reward. The scope of a planar reacher may also be too limited to draw more general conclusions for other control tasks.\", \"Related to task differentiation, a potential weakness in the proposed methods is how the activation traces are generated for source and target model when optimising the projection. Effectively only the target model is evaluated within distribution, after which the inputs observed there are then remapped to the source model. It's unclear what the effect is of potentially evaluating the source model out of its training distribution. I was hoping to see a way to correlate trajectories collected with the respective models independently. Perhaps the type of problem considered only allows a singular solution even across different number of joints, but it would be good to verify this.\", \"While the authors do evaluate a large combination of models, only averages are reported. Given how close the results seem to be to random in Fig. 5, it's hard to gauge the significance of the results. Some variance or error metric would be very valuable.\", \"While the idea of pruning the networks before correlating intuitively seems like a good idea, this is not experimentally validated. It would be good to add a comparison with and between unpruned models as well.\", \"While okay to follow, the text could use a bit more polish.\", \"#### Questions\", \"There's currently no mention of how all these models were trained. One caption hints at DQN? 
Please provide more details.\", \"It's unclear how to interpret training duration in Fig. 3. Is this the time required to \\\"pass\\\" the validation set again after pruning?\", \"What do the values in the table in Fig. 5 represent? Sums of weights in the projection matrix?\", \"#### Conclusion\", \"While overall the method presented makes sense, and the evaluation is relatively thorough, the scope of the problems evaluated is considerably limited to draw any general conclusions of its validity, and some of the framing and details raise questions. As such I'd consider this submission marginally below acceptance.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Potentially a good paper requiring more conclusive results\", \"review\": \"# Summary\\nThe paper presents a technique to compare networks trained to solve similar tasks trained in different context. The considered task is reaching with a robotic planar arm; the considered context is varied varying the robot degrees of freedom. The goal of the paper is to find correlations across neural activity patterns across networks trained to solve the same task in different contexts.\\n\\nTo achieve their goals, authors propose an heuristic network pruning algorithm to reduce the network size while keeping performance in training and evaluation. To correlate different networks, authors propose a technique to project a source network onto a target network.\\n#### Clarity\\nThe paper is well written and easy to read. \\n#### Originality \\nThe paper follows a main stream research which aims at pre-training neural networks on general task to speed up learning of specific tasks. As far as I know (I am not an expert in the field), the proposed analysis is original.\\n#### Significance \\nThe significance of this work is relatively low. The results could be more conclusive with further analysis and experiments.\\n#### Major comments\\n* **Kinematic redundancy**. Authors have chosen a specific task, nominally reaching a target with a planar manipulator. Within this context, in presence of more than two degrees of freedom (i.e. with kinematic redundancy), the solution of the task itself is non-unique. Therefore fixing the context (i.e. fixing the robot kinematics, the robot geometry, etc) doesn't guarantee that the RL algorithm will find similar solutions across different runs. Actually, given the random training procedure each solution should come up with completely different strategy, exploiting the kinematic redundancy. If so, how do authors compare networks which exploit differently the kinematic redundancy? 
Are the proposed metrics (greedy mapping, linear mapping, etc) invariant with respect to different solutions to the same task (i.e. invariant to kinematic redundancies)?\\n* **Results and conclusions**. Goal of the paper (mentioned in the first two sentences of the abstract) is to progress in understanding if pre-training of end-to-end RL can be used as feature extractors and hierarchical organizations. Despite what is claimed in the conclusions (\\\"Networks trained for robots with only small joint number difference show a good correlation of neuron activation, for small differences this correlation can be found layer-wise.\\\"), the authors fall short in giving a sound explanation of why this is the case.\\n#### Minor comments\\n* **Page 7, line 8 from the top.** \\\"[..] the reflexive mapping\\\". This was not mentioned before, authors should give more details.\\n* **Page 7, line 1 from the bottom.** \\\"Are joint numbers very different a proper input transformation is crucial to find correlations\\\". Please check this sentence. \\n* **Page 8, caption of figure 5.** \\\"balanced mapping \\u03b81\\u2032 = \\u03b81 , \\u03b82\\u2032 = \\u03b83 (4b) we apply in contrast to the naive mapping \\u03b82\\u2032 =\\u03b82,\\u03b82\\u2032 =\\u03b82(4a).\\\" It's unclear what these mappings refer to and how they have been used.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea worth exploring, however, it needs to be developed further.\", \"review\": \"### Summary\\n\\nThe authors investigate individual neuron activations over time, and compare the neuron activations within individual networks all-to-all and layer wise. \\n\\nA distance metric is introduced and utilized to set up a pruning procedure to maximize the information density in learned neural networks and reduce redundancy as well as unused network nodes.\\n\\nFinally, neuron activations are used to assess the correlations between learned policy networks for manipulators with a varying number of degrees of freedom. A projection mapping between different policy networks is implemented and analysed, as a type of transfer learning between different robot morphologies.\\n\\n\\n### Review\\n\\nI believe that this is an interesting work which tries to understand the inner-workings of a robot-control policy network by examining the network activations, and further using this information to prune the unnecessary neurons. Transferring learned network policies between robot morphologies is very useful and preliminary insights seem interesting.\\n\\nThere are some important flaws that need to be addressed regarding the clarity of the methodology and contributions, as well as the significance of the experimental evaluations. \\n\\nMy impression is that although the work tackles an important problem with a good idea, this is still an incomplete work, as the presented experimental evaluation is insufficient to draw significant conclusions.\\n\\nBelow are some of the comments organised by sections, including concrete suggestions for improving the work.\\n\\n1. INTRODUCTION\\n\\n The main motivation and goal should be presented explicitly in a separate paragraph. There seems to be a missing link between the neuron activation estimations for pruning and correlation analysis for transfer mapping.\\n\\n2. 
RELATED WORK\\n\\n I do not see the relevance of Bellemare et al. (2013), Mnih et al. (2015), Chess Silver et al. (2017), Go Silver et al. (2016) and Lillicrap et al. (2015) to the specific problems investigated in this paper.\\n\\n It would be useful to consider other work investigating NN complexity, and adding a discussion on how it relates to this work:\\n - Gaier, Adam, and David Ha. \\\"Weight agnostic neural networks.\\\" Advances in Neural Information Processing Systems. 2019.\\n - Li, Chunyuan, et al. \\\"Measuring the intrinsic dimension of objective landscapes.\\\" International Conference on Learning Representations. 2018.\\n\\n3. EXPERIMENTAL SETUP\\n\\n \\u201cplanar space robot manipulator that represents a multitude of real world applications\\u201d Do you mean that this task is a surrogate for examining many applications? This seems like a strong statement as it represents a small subset of potential applications.\\n\\n Moreover, planar space usually refers to having a horizontal plane as a task space. In this case it would be more clear to say \\u201coperating in a vertical plane\\u201d.\\n\\n \\u201cA neural network is trained with end-to-end Reinforcement Learning\\u201d this usually means from input images to output torques, but in the presented approach position control is used, so this should be emphasised.\\n\\n \\u201cphysical robotic simulation\\u201d, usually it is said \\u201cphysical robot\\u201d referring to a real robot experiment, or a \\u201crealistic robot simulation\\u201d which refers to a simulation that takes into account the real robot component values (dimensions, mass, inertia\\u2026).\\n\\n Equation 1 is not clear as the text says that $\\\\textbf{x}$ consists of joint angles $\\\\hat{\\\\theta}_i$ but eq 1 shows the sin and cos projections of the angles. Moreover, the index $i$ seems to refer to both the time-step index $t_i$, as well as the joint index $\\\\hat{\\\\theta}_i$. 
Also, $n$ is not introduced as the number of joints.\\n\\n What is the reason behind mapping the target distance into a (0,1] range?\\n\\n Another important aspect which I believe should be addressed, is what is the effect of the task which is learned on the activation correlations? Basically, different tasks would have a different state distribution seen at the input of the policy? One simple example would be examining different types of control - position vs velocity vs torque.\\n\\n\\n4. NEURON ACTIVATION ANALYSIS\\n\\n This section provides an analysis for the 3DOF robot only and should be emphasised.\\n\\n \\u201cWe define a distance metric between neurons that is based on the neuron activation history in scope of every episode in order to account for the dynamics in motion trajectory learning.\\u201d It is a bit unclear how the activation history is evaluated. This is one of the most important parts of the paper and should be made very clear.\\n\\n I am not 100% sure that the distance metric should be referred to as Euclidean distance, as it does not operate on euclidean space, so I think it would be sufficient to say \\u201ca proposed neural activation distance metric\\u201d.\\n\\n \\u201cFor a set of sample episodes E representing the agents action space\\u201d How do episodes represent the action space exactly, I do not understand this part completely.\\n\\n The notation is a bit confusing, as $n$ refers both to the number of joints and the number of neurons. Also, what does the superscript $T$ in $R^T$ stand for?\\n\\n It seems to me that there is another summation missing in equation 2, as the indices of neurons should be more generic like $n_i$ and $n_j$ (unless there are only 2 neurons?). Also, are these neurons form the same layer or all the neurons in the network? This should be clarified in the text and formulated in Eq 2.\\n\\n What is $C$ in equation 3? 
I assume it is the cluster and cluster size, but this should be explained explicitly.\\n\\n Some of the distributions in Fig 2 are a bit skewed and look more like Beta distribution rather than Gaussian. Why is having a Gaussian distribution of distances relevant? \\n\\n There might be some other visualisation method that could be used here to shed light on the findings. Because currently, it is difficult to see any significant differences between the plots.\\n\\n The reference to Fig 2 should be improved, the definition of the trained, random and pruned lines is missing (both in the figure and the text). \\n\\n What does \\u201call-to-all distribution of trained networks\\u201d mean (all-to-all neuron comparison)? \\n\\n The findings for the clustering (Fig 2 bottom) are very interesting! Could you maybe elaborate on these more?\\n\\n\\n5. HEURISTIC NETWORK PRUNING\\n\\n What is the motivation for retraining after pruning? If neurons that have similar activations are pruned, what would happen if one of them is kept with the corresponding weights? How would this affect the performance? This could be an insightful baseline comparison.\\n\\n Moreover, what is the advantage of reusing the network weights for initialisation, instead of randomly initialising them? This would also be an interesting experiment to conduct.\\n\\n Please introduce what are \\u201cdead neurons\\u201d.\\n\\n Wrong reference to Equation (5) should be (4)\\n\\n $\\\\tau > 2$ \\u2192 $\\\\tau > 0.2$ \\n\\n The accuracy in Fig 3 left, is given in [%], is this a mistake, because then it seems that the initial accuracy is only 0.8%. If this is actually 80%, why is the optimal $\\\\tau = 0.2$ as the corresponding accuracy has fallen to 20%. How is this evaluated? Does the green label correspond to pruned or initial network size?\\n\\n\\n6. 
CORRELATIONS IN NETWORKS TRAINED FOR MULTI-JOINT ROBOTS\\n\\n It would be useful to start with a high level overview of what is the goal of finding these correlations in addition to how they are calculated. It seems that the correlations between networks are not examined, rather the mappings between them. Therefore this is slightly misrepresentative of what is actually being done.\\n\\n How are activation matrices A and B related to P? Also, what are $\\\\alpha_m$ and $\\\\beta_k$ ? \\n\\n What is the motivation behind making $\\\\bar{P}^g_{km}$ sparse according to minimal distance in Greedy mapping, or applying L1 regularisation in Linear mapping? I assume your goal is to map joints 1-1 rather than combining them? Please explain this in a more clear way if possible.\\n\\n Equation 5 does not show the variable which is optimised. What does $\\\\alpha^{\\\\downarrow}_m$ stand for?\\n\\n Equations 6 and 7 are not referred to in the text.\\n\\n The quantities defined in equations 6, 7, 8 and 9 should be properly introduced as evaluation metrics and named accordingly, in a separate paragraph.\\n\\n Figure 5 correlations (top right) would be more impactful if represented with a heatmap matrix in addition to the numbers. The graphs on the bottom are not very clear.\\n\\n The discussion of the mapping results is very interesting and should emphasise the mapping between different robot morphologies. For example the difference between 4 -> 2 and 2 -> 4, where the latter has a higher error which could be expected as there is not enough information stored which can be decoded. Having additional comparisons of robot manipulators with larger differences in DOF would probably emphasise this and support the given conclusions better. 
This would also strengthen the paper significantly.\\n\\n Another metric which would be necessary to evaluate the transfer procedure, is to evaluate the mapped network on a test set of the reaching task.\", \"other_comments\": [\"Figure captions should be larger\", \"Several typos\", \"Consistency in using \\u201c3 joint manipulation task\\u201d or \\u201c3-DOF manipulator\\u201d\", \"In the conclusion it is stated: \\u201cIn this paper we analyzed individual neuron activation and correlations between neural networks trained for goal reaching of a variety of planar space robot manipulators.\\u201d Having the same robot structure with 3 different DOFs is not sufficient to be considered as a variety of manipulators.\", \"It would significantly help the clarity of the paper to split certain sections into thematic paragraphs.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
guEuB3FPcd | AlgebraNets | [
"Jordan Hoffmann",
"Simon Schmitt",
"Simon Osindero",
"Karen Simonyan",
"Erich Elsen"
] | Neural networks have historically been built layerwise from the set of functions in ${f: \mathbb{R}^n \to \mathbb{R}^m }$, i.e. with activations and weights/parameters represented by real numbers, $\mathbb{R}$. Our work considers a richer set of objects for activations and weights, and undertakes a comprehensive study of alternative algebras as number representations by studying their performance on two challenging problems: large-scale image classification using the ImageNet dataset and language modeling using the enwiki8 and WikiText-103 datasets. We denote this broader class of models as AlgebraNets. Our findings indicate that the conclusions of prior work, which explored neural networks constructed from $\mathbb{C}$ (complex numbers) and $\mathbb{H}$ (quaternions) on smaller datasets, do not always transfer to these challenging settings. However, our results demonstrate that there are alternative algebras which deliver better parameter and computational efficiency compared with $\mathbb{R}$. We consider $\mathbb{C}$, $\mathbb{H}$, $M_{2}(\mathbb{R})$ (the set of $2\times2$ real-valued matrices), $M_{2}(\mathbb{C})$, $M_{3}(\mathbb{R})$, $M_{4}(\mathbb{R})$, dual numbers and the $\mathbb{R}^3$ cross product. Additionally, we note that multiplication in these algebras has higher compute density than real multiplication, a useful property in situations with inherently limited parameter reuse such as auto-regressive inference and sparse neural networks. We therefore investigate how to induce sparsity within AlgebraNets. We hope that our strong results on large-scale, practical benchmarks will spur further exploration of these unconventional architectures which challenge the default choice of using real numbers for neural network weights and activations. | [
"Sparsity",
"Pruning",
"Efficiency",
"Mathematics"
] | Reject | https://openreview.net/pdf?id=guEuB3FPcd | https://openreview.net/forum?id=guEuB3FPcd | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"kzKLnXf-M74",
"VF52BomZ2Nb",
"XLutTBTn4O",
"x_NtIWstj3y",
"beqnp3pzIr",
"ULa8RnY-aUC",
"PEdsqqhMCw",
"9PElhsrS11z",
"AHl56e811mE",
"fNwPsjdj4Kr",
"jwaCE4SlSJG",
"EQScenKpK1t",
"KHqXw_Vu8sh",
"P9MLsJMrRQW",
"FxUQ8wGlRce",
"_sCd8EyDKe",
"kYdrqZBz_b",
"k3b56qlIUhz",
"b13emmFhpZs"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040500801,
1605984894780,
1605984371465,
1605971391504,
1605895565388,
1605565805533,
1605547065403,
1605532738740,
1605393806007,
1605391349252,
1605377224178,
1605377037259,
1605376992765,
1605374822548,
1605374551316,
1605314566723,
1604032652222,
1603847008646,
1603598926327
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3482/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3482/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3482/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3482/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3482/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3482/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3482/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3482/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3482/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3482/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3482/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3482/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3482/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3482/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3482/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3482/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3482/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3482/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper proposes to deep neural network models with elements of the weight from algebras, and considers a wide range of algebras and large scale promising experiments. The paper raised a heated discussion.\", \"pros\": [\"Using algebras, one can hope for more efficient architectures\", \"Numerical experiments on a wide range of problems\"], \"cons\": [\"The theoretical grounding provided in the current version of the paper is not sufficient. The study is empirical (nothing wrong about it), but there is no clear understanding/explanation of why particular choice is better than another, and also why it works in the particular setup.\", \"The title does not reflect the content of the paper. It is too broad, and also in some sense \\u201cprovocative\\u201d. The reader expects something much more significant from it.\", \"Experiment setup: the resulting flops/accuracy figure (main result, Figure 1) does not contain error bars. I.e., the accuracies should be averaged over several random seeds in order to guarantee the resulting metrics. Also, this figure does not show a clear advantage over the ResNet-50 baseline.\"]}",
"{\"title\": \"You do not understand.\", \"comment\": \"If you do not understand them, try to discuss them in detail with your mentors. It is OK to reply/respond that you have better logic while the reviewer may not fully appreciate your work, but do not try to dispute without fully understanding the comments.\"}",
"{\"title\": \"Would like to reconsider if you can address the following concerns.\", \"comment\": \"1. Please point out tasks that may have a clear need to switch to the proposed algebra sets. I am NOT saying that \\\"the provided tasks in the paper are not important\\\". I am questioning that in a top AI conference, what kind of readers would benefit from such an engineering switch, proposed by this paper.\\n\\n I notice the authors question the review comments. But actually, the meaning and question is like the above.\", \"i_wrote_several_sentences_to_encourage_the_authors_to_provide_responses\": \"\\\"Would the authors be able to justify this?\\\" \\\"The concern is not \\\"What tasks would you like to see?\\\" but why engineers need such a switch.\\\"\\n\\n \\\"I would hope the authors clarify their methodology, and then present the advantages obtained in the experiments.\\\" \\n \\n \\\"I would simply ask the authors to respond to a direct question: how would you like the community to appreciate your work?\\\"\\n\\n \\\"ImageNet is not that challenging and there may be no clear need to switch to complex numbers\\\". For example, if the authors believe ImageNet is your choice, please give more clear reasons for asking many engineers/readers to try your approach. Please be advised, if ICLR accepts this paper, many readers will be interested and would like to try it out. If you can clearly justify it, I would be happy to re-evaluate. \\n\\n Your previous response \\\"We are surprised that you do not think ImageNet is not a suitable task for demonstrating this method: it is a challenging task and is widely used as a way to benchmark state-of-the-art methods. \\\" does not address my above concern. \\n \\n\\n2. A second concern about your experiments \\\"I would hope the authors clarify their methodology, and then present the advantages obtained in the experiments.\\\" \\n\\n You mentioned some recent and close work uses similar performance metrics. 
However, the question is would you please \\\"clarify the experiment methodology\\\". Mentioning recent work is good, but does not FULLY address my concern.\\n\\n3. Which category would you claim your work to be \\\"Doing something new, doing something important, doing something new and important\\\"?\\n\\n4. Please try to point out the scientific values behind. It can be simple but effective. For example, when I read the manuscript, one can hardly believe \\\"2x2 matrix rings)... work better than anything ....\\\" this claim is rather problematic. Try to provide answers and convince the reviewer and future readers, using two or three sentences (clear logic).\\n\\n5. This exaggerated claim raised concerns about the rigor of the methodology of this work.\\n\\n Your response can justify that your work has no such exaggeration. Try to provide a response that addresses such a concern, if future readers also challenge such an exaggeration.\\n\\n\\nAll the above concerns (and some others in my comments) are asking for your clarifications. \\n\\nIf you do not understand them, try to discuss them in detail with your mentors. It is OK to reply/respond that you have better logic while the reviewer may not fully appreciate your work, but do not try to dispute without fully understanding the comments.\"}",
"{\"title\": \"Alternative Title?\", \"comment\": \"Because the reviewer has such strong concerns about the title, we wonder if changing the title would allow the reviewer to reconsider their opinion on the rest of the paper?\"}",
"{\"title\": \"Anything more we can address?\", \"comment\": \"As the discussion period ends soon, we were wondering if there are any more concerns we could address to help increase your score of our work?\"}",
"{\"title\": \"Re: concern of new insights\", \"comment\": \"There seems to be quite a bit of discussion regarding the title, and if calling it \\\"AlgebraNets\\\" might have overpromised and underdelivered; I understand the other reviewers' concerns. However, upon re-examining the paper, I believe there may be enough merit to warrant acceptance. I agree with other reviewers that this work may not be entirely novel (which I point out in my review as well); however, I see this as a valuable contribution for the following reasons:\\n\\n1. To my knowledge, one of the first publications to empirically show the value of other algebras on established datasets (ImageNet, enwiki-8) and respective near-SOTA model architectures such as transformer-xl. Deep complex networks motivates the line of research, but I would consider CIFAR-10/100 to be more of toy examples. These results, if broadly disseminated, have the potential to encourage subsequent contributions. I see the leap from examples to larger-scale results as impactful. If there are other publications that establish similar results, please share and I can certainly be convinced otherwise.\\n\\n2. On a similar note, a search (and exploration) of algebras beyond complex numbers is valuable and their recommendation of using 2x2 matrix rings as optimal under a computational notion is promising.\\n\\n3. Upon seeing the authors' response on memory footprint, I do see there is a tradeoff between the computation and memory footprint, making it a design choice. Something that would strengthen the paper a bit more is if they can define a cost model based on standard hardware (e.g. GPU/TPU) and show how using a 2x2 matrix algebra is conclusively better than real numbers.\\n\\nOverall, my vote of confidence is for the empirical results on widely adopted convolutional models, transformer-xl etc., the ease of usage, etc. 
This could be one of the papers that spur the paradigm shift from real numbers to other algebras in the SOTA spectrum of models. Unless there are prior results I am unaware of that have shown similar results.\"}",
"{\"title\": \"Updated Version\", \"comment\": \"We would like to thank the reviewers for the comments. We have updated the manuscript. We have moved the figures such that they appear closer to where they are referenced. We updated the citation style, as asked for. We also standardised the presentation of citations. In the supplement, we added a discussion of the change in activation memory.\"}",
"{\"title\": \"Response\", \"comment\": \"The reviewer seems to believe that using the algebras in this paper would be a large engineering challenge. We note that it is fairly simple to implement these networks in Tensorflow or Pytorch, just as we\\u2019ve shown in the appendix for JAX. This does leave some performance on the table, but this is a natural progression for almost all new ideas. First it is demonstrated that they work, then later maximum performance implementations are created. The initial implementations of real-valued convolutions were far from optimal as one example.\\n\\nIt is also odd to claim that \\u201csome improvements on well-studied datasets are not enough\\u201d when probably a majority of all papers accepted will do exactly this \\u2014 show improvements on well-studied datasets. Indeed, one should be skeptical of claims on poorly studied datasets, it is far harder to show gains on well studied datasets. We strongly disagree with the implication that because \\\"we already have very effective neural networks\\\", research on _more_ effective techniques is not necessary. \\n\\nThe reviewer repeatedly claims that \\u201cImageNet \\u2026 is not convincing enough\\u201d and \\u201cexperiments and claims are from two tasks (on two datasets) which are not enough\\u201d, but then when we ask directly for which additional tasks would be useful to include, they instead say \\u201cThe concern is not \\u2018What tasks would you like to see?\\u2019 but why engineers need such a switch.\\u201d If the reviewer could clarify their position, the authors would find it most helpful. And for what it\\u2019s worth, we would like to clarify that we believe we have three tasks and four datasets. Image classification, character level and word level language modeling are the tasks, and ImageNet, CIFAR-10 (appendix), enwik8 and WikiText-103 are the datasets\"}",
"{\"title\": \"Response to Item 3\", \"comment\": \"Sorry for not responding to the third point-- in Figure 1, we are showing performance per parameter (left) and performance per FLOP (right). This is important because while an algebra may be able to reduce the parameter counts, there may be an increased FLOP cost, especially due to a procedure like whitening. We note that in earlier work (Deep Complex Networks and Deep Quaternion Networks) the cost of whitening was largely ignored, since results focused on parameter efficiency. In terms of performance-per-parameter, many algebras are actually more performant than the baseline. However, in terms of FLOPs, only M_2(R) is able to match the baseline performance. It is important to note, however, that these algebras have added benefits: specifically, the higher compute density. Aside from being important for sparsity, with proper kernels (or even hardware!) the performance gap may be negligible.\"}",
"{\"title\": \"Please clarify my question\", \"comment\": \"Thanks for your response.\\n\\nSure, there are, because you did not respond to my item 3 above. If the performance could not be improved, what is the meaning of your complex model?\"}",
"{\"title\": \"Thank you for the review\", \"comment\": \"Thank you for your careful read of our manuscript. We appreciate the comments very much, and will update the paper with the changes to the references and try to better stagger the introduction and reference to the different figures in the manuscript.\\n\\nAre there technical issues we can address/clarify/improve that would help improve the perception of our work?\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thanks for the review. We have tried to address your section of cons below, and will update the text in the next few days to reflect these changes. We, of course, thank you for listing the pros of our work, we agree that the exploration of alternate algebras is both useful and impactful!\\n\\n> The authors motivate this work with computational efficiency; however, I did not find any discussion or comments on the total memory footprint. Do any of the algebras require us to keep track of partial computations/intermediate steps - subsequently increasing the total memory footprint? In the case of vision examples, which are dominated by the activations, what are the implications? If the memory footprint is indeed not consistent with a real-valued algebra, then are we trading model/input size for fewer parameters/efficient computation?\\n\\nThere are two issues that could increase the memory footprint. The first is that regardless of implementation, the number of activations will be larger by a factor of about 1.3 (empirically) for the M2R networks when matching the performance of the real network.\\n\\nThe second issue is that with our current implementation, there are indeed intermediate feature maps that could increase the memory usage. For M2R there are 8 convolutions of size C/4, which means the memory usage would approximately be doubled. However, we note that if the appropriate kernels were written to perform the algebra calculation at the lowest level, then this doubling overhead would not exist. \\n\\nWe will update the text saying that this is a possible concern and point out mitigating strategies.\\n\\n\\n> Are certain algebras more amenable to specific hardware architectures? If so, a brief discussion would enhance the paper overall.\\n\\n\\nThe matrix algebras would map nicely to the currently popular systolic arrays common on accelerators such as GPUs and TPUs. 
Although the arrays on current GPUs and TPUs are bigger than sizes considered here, it is possible that future hardware could move to smaller arrays. Having a larger number of smaller systolic arrays would map nicely to sparse algebra networks. It would also be possible to build specific algebra multipliers at the hardware level for any algebra.\\n\\nThese algebras would also accelerate inference cases that would otherwise have a batch size 1 and be completely bandwidth limited, by increasing the compute density even in this case.\\n\\nWe agree this is an interesting direction and will add a section to the appendix that emphasises this further.\"}",
"{\"title\": \"Still huge title without convincing contributions\", \"comment\": \"1. \\\"Do you mean why are more efficient neural networks necessary? Or why are different algebras necessary for more efficient networks?\\\"\\n The target project aims to improve efficiency via complex algebras; I am interested in why one should go for it. Yes, there is a performance gain, but there is also overhead, and it is not compatible with TensorFlow/PyTorch (as used by the wider community). Why is it necessary to go for it? It is OK to be a small group of researchers or a particular industrial product. As in the following points, I think claiming ImageNet should go for it is not convincing enough.\\n\\n2. Do not be \\\"surprised that you do not think ImageNet is a suitable task for demonstrating this method: it is a challenging task and is widely used as a way to benchmark state-of-the-art methods. We also have results on enwik8 and wikitext-103 language modeling\\\"\\n The reason is that we already have very effective neural networks. I do not see a clear reason why we should urge engineers to switch to much more complex algebras. If the reviewers vote for an acceptance of \\\"AlgebraNets\\\" at ICLR, some improvements on well-studied datasets are not enough to justify why ICLR accepts such a big title.\\n The concern is not \\\"What tasks would you like to see?\\\" but why engineers need such a switch.\\n\\nAnother very direct question would be: the compared schemes on ImageNet were targeting improved accuracy, while your results claim better computational efficiency. Actually, a fairer comparison would be against those compression schemes (EfficientNets, MobileNets (included), complex-valued nets), right? The current presentation of the evaluation methodology is not convincing.\\n\\n3. \\\"We feel there are a series of important contributions in the work: a.) b) c) and d)\\\"\\n Those claims are interesting, but are far from a support of \\\"AlgebraNets\\\". The experiments and claims are from two tasks (on two datasets), which are not enough.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review. Below, we\\u2019ve tried to answer your questions, but firstly here is our motivation for this work, which we hope will help frame both the manuscript and our response.\\na) We wanted to search for more efficient alternatives to real numbers to use in neural networks. This was the goal from the beginning. We were especially interested in whether we could combine the higher compute density of algebras with sparsity.\\nb) There had been some prior work showing complex numbers and quaternions were more parameter efficient, but nothing about FLOPs. FLOPs are correlated with runtime and often at least as important as parameter efficiency, especially in vision models.\\nc) We noticed the prior work on complex numbers and quaternions used very FLOP-expensive whitening and special initialization.\\nd) We chose to investigate those algebras and many more on both a parameter and FLOP efficiency basis. Furthermore, we made preliminary steps towards testing sparsity-inducing techniques and these algebras.\\n\\nIn response to your specific queries:\\n\\n* Do you mean why are more efficient neural networks necessary? Or why are different algebras necessary for more efficient networks? They are one approach to finding more efficient architectures, but certainly not the only one.\\n\\n* We are surprised that you do not think ImageNet is a suitable task for demonstrating this method: it is a challenging task and is widely used as a way to benchmark state-of-the-art methods. We also have results on enwik8 and wikitext-103 language modeling which, while certainly not large by GPT-3 standards, have been considered standard language modeling benchmarks in the literature. What tasks would you like to see?\\n\\n* Thanks for commenting that the results look promising. We chose our axes based upon the standards in other work on efficiency in neural networks. For example: EfficientNet (M. Tan, et al 2019) show results as FLOPs/parameters vs top-1 accuracy. Similarly, MobileNet (A.G. Howard et al 2017) and MobileNet v2 (M. Sandler et al 2018) also present the same axes. Many pruning papers also use these same axes, for example, \\u201cWhat is the State of Neural Network Pruning?\\u201d from D. Blalock et al 2020 and \\u201cThe State of Sparsity in Deep Neural Networks\\u201d from T. Gale et al 2019. We thank the reviewer for commenting that they found some of the methodology unclear -- are there certain aspects that you found to be particularly confusing?\\n\\n* We feel there are a series of important contributions in the work:\\na.) We find some complexities from prior works are not needed. For example, we do not need special initializations for good performance.\\nb.) Clear demonstration of which algebras are more efficient both in terms of parameters and FLOPs in a modern regime across multiple domains.\\nc.) We discover that M_2R is better than all algebras that have been previously considered in terms of performance per FLOP while still offering a substantial parameter reduction.\\nd.) Showing that M_2R networks can be made sparse and will be better than normal sparsity due to the higher compute density of the algebra.\\n\\nWe hope that this helps address your concerns. Please do let us know if there is anything more we can clarify.\"}",
"{\"title\": \"We feel there are many new insights and contributions.\", \"comment\": \"Deep Complex Networks is an interesting paper that highlighted some of the potential of investigating these alternate algebras. However, they only investigate a single algebra (complex numbers) and do not recognize the increased compute density of the algebra, nor explore pruning or sparsity-inducing methods that would greatly benefit from this increased compute density on modern hardware. Additionally, while their proposal is parameter efficient, it is not FLOP efficient due to the computationally expensive whitening procedure.\\n\\nWe test a large number of algebras and find an algebra (2x2 matrix rings) that actually works better than anything that has previously been looked at, in terms of performance per FLOP. Additionally, we show that we do not need some of the complexities discussed in earlier work exploring these algebras: specific initialisation schemes, for example, do not seem to matter as much.\\n\\nLastly, we find some crucial differences in terms of the efficacy of these algebras when testing at scale. Using ImageNet instead of CIFAR-10, one does not recover the same performance per parameter. To further test this regime, we also use the more computationally efficient MobileNet. Finally, we test the most promising algebras on a variety of different domains as well.\\n\\nDeep Complex Networks was an exciting work, but we think we have made a series of new contributions that are important to anyone interested in complex networks or other algebras.\"}",
"{\"title\": \"A concern of any new insights over \\\"Deep Complex Networks\\\" ICLR 2018.\", \"comment\": \"Dear Reviewer,\\n\\nI have a concern: does this work provide any new insights over \\\"Deep Complex Networks\\\" ICLR 2018? https://openreview.net/forum?id=H1T2hmZAb\\n\\nI am not sure whether there is enough value to support this work appearing at a top AI conference. I would like to hear your opinions.\"}",
"{\"title\": \"Huge title without convincing contribution\", \"review\": \"In this paper, the authors propose the usage of complex numbers in deep neural networks. It would be good to know that complex numbers, n x n matrices, quaternions, diagonal matrices, etc. can all be used in neural networks. The authors also claim benchmark performance in large-scale image classification and language modeling.\\n\\nHowever, this work cannot be appreciated due to the following aspects:\\n1. A first question is \\\"Why is it necessary?\\\" Interestingly, the authors already included Section 2.1 Why Algebras? However, I am not convinced at all. A good answer may take either of two forms: A) simply a math step showing great potential behind it; B) large-scale neural networks that have engineering advantages. It seems that the authors took the second approach; however, ImageNet is not that challenging and there may be no clear need to switch to complex numbers. Would the authors be able to justify this?\\n\\n2. Then, the authors go directly to evaluations. The figures seem to show good advantages. However, could you please justify your x,y-axes? The reported results look highly biased. As a reviewer, I have to suspect that the authors may have selectively presented their results.\\n\\n A good research paper on such a big topic should give a clear methodology first, right? If the methodology is questionable, such good results may become noise to the community.\\n I would hope the authors clarify their methodology, and then present the advantages obtained in the experiments.\\n\\n3. As a top AI conference, I believe that we are looking for intellectual contributions.\\n This paper is working under a huge title, which is attractive. However, when I try to identify the intellectual contributions (can be theory, algorithm, engineering, applications), I am not convinced at all. I know such a topic is not easy to handle. I would simply ask the authors to respond to a direct question: how would you like the community to appreciate your work?\\n\\nNote: a lot of the disputes are around \\\"the huge title 'AlgebraNets'\\\". However, I did not receive a justification response from the authors. A possible reason may be that the authors are not aware of how big the topic is, and were so attracted to/confident in the current experimental improvements (which is also very appreciated).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Impactful paper with strong empirical results\", \"review\": [\"## Summary\", \"The authors propose AlgebraNets - a previously explored approach to replace real-valued algebra in deep learning models with other associative algebras that include 2x2 matrices over real and complex numbers. They provide a comprehensive overview of prior methods in this direction and motivate their work with potential for both parameter and computational efficiency, and suggest that the latter is typically overlooked in prior literature. The paper is very well-written and follows a nice narrative, and the claims are mostly backed empirically with experimental results.\", \"## Pros\", \"Empirically justified with experiments on state-of-the-art benchmarks in both computer vision and NLP.\", \"Establishes that exploring other algebras is not just an exercise for mathematical curiosity but also practical, and encourages deep learning practitioners to extend the results.\", \"Perhaps the most useful aspect is that the experiments fit well into a standard deep learning framework \\u2013 with conventional operations, initialization, etc. That is, the algebras do not require significant custom ops/modifications to achieve state-of-the-art results.\", \"Shows multiplicative efficiency (parameter count and FLOPs) in many cases\", \"## Cons\", \"The authors motivate this work with computational efficiency; however, I did not find any discussion or comments on the total memory footprint. Do any of the algebras require us to keep track of partial computations/intermediate steps - subsequently increasing the total memory footprint? In the case of vision examples, which are dominated by the activations, what are the implications? 
If the memory footprint is indeed not consistent with a real-valued algebra, then are we trading model/input size for fewer parameters/efficient computation?\", \"An intuitive justification of the algebras used in these experiments, along with insight for future algebras might be a nice addition, although I wouldn't consider it a con.\", \"Are certain algebras more amenable to specific hardware architectures? If so, a brief discussion would enhance the paper overall.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting study of replacing the traditional real-valued algebra with other associative algebras\", \"review\": \"The paper proposes an interesting kind of networks, AlgebraNets, which is a general paradigm of replacing the commonly used real-valued algebra with other associative algebras. This paper considers C, H, M2(R) (the set of 2 \\u00d7 2 real-valued matrices), M2(C), M3(R), M4(R), dual numbers, and the R3 cross product, and investigates the sparsity within AlgebraNets.\\n\\nThe work in the paper is interesting and this paper is generally written well. However, there are a few issues/comments with the work:\\n\\n1. The citation of the references in the main body of this paper is not easy to read. It would be better to replace the format \\u201cauthor(s) (year)\\u201d with the format \\u201c(author(s), year)\\u201d;\\n\\n2. Some figures and tables do not appear near their discussion; for example, Figure 1 is shown on an earlier page but is not discussed until page 5, which makes it difficult to read;\\n\\n3. In Figure 1, in the subfigure in the second row and first column, it seems that the model with H and whitening achieves the best stable performance. In the subfigure in the second row and second column, it can be seen that the model with H is not better than the baseline model;\\n\\n4. There are many inconsistencies in the format of the references, for example:\\n\\n1) In some places the author's name is abbreviated, while in others it is not. References \\u201cC. J. Gaudet and A. S. Maida. Deep quaternion networks. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 1\\u20138, 2018.\\u201d and \\u201cGeoffrey E. Hinton, Sara Sabour, and Nicholas Frosst. Matrix capsules with em routing. In ICLR, 2018.\\u201d;\\n\\n2) In some places the conference\\u2019s name is abbreviated with the link, while in others it is not. References \\u201cSiddhant M. Jayakumar, Wojciech M. Czarnecki, Jacob Menick, Jonathan Schwarz, Jack Rae, Simon Osindero, Yee Whye Teh, Tim Harley, and Razvan Pascanu. Multiplicative interactions and where to find them. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rylnK6VtDH.\\u201d and \\u201cGeoffrey E. Hinton, Sara Sabour, and Nicholas Frosst. Matrix capsules with em routing. In ICLR, 2018.\\u201d.\\n\\nPlease check carefully and correct the inconsistencies.\\n\\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\\n\\nThe paper replaces the traditional real-valued algebra with other associative algebras and shows its parameter and FLOP efficiency. In the beginning, I thought \\\"it is an interesting piece of work, and it may be helpful to develop the basic structural design of neural networks.\\\" However, after getting the response from the author(s), I have more doubts about the significance of the work in this paper: although many types of models have been proposed in this paper, the improvement over the baseline models is limited. I did not lower the grade on this paper since I thought it would be interesting and important (if effective) to extend the traditional real number field to more complex algebraic structures.\\n\\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
hBxSksqPuOg | Random Network Distillation as a Diversity Metric for Both Image and Text Generation | [
"Liam H Fowl",
"Micah Goldblum",
"Arjun Gupta",
"Amr Sharaf",
"Tom Goldstein"
] | Generative models are increasingly able to produce remarkably high quality images and text. The community has developed numerous evaluation metrics for comparing generative models. However, these metrics do not effectively quantify data diversity. We develop a new diversity metric that can readily be applied to data, both synthetic and natural, of any type. Our method employs random network distillation, a technique introduced in reinforcement learning. We validate and deploy this metric on both images and text. We further explore diversity in few-shot image generation, a setting which was previously difficult to evaluate. | [
"GAN",
"NLP",
"ImageNet",
"generative",
"diversity",
"VAE",
"CelebA",
"language model"
] | Reject | https://openreview.net/pdf?id=hBxSksqPuOg | https://openreview.net/forum?id=hBxSksqPuOg | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"psfotMYP3Ne",
"kT6xEXbwgr5",
"9kD2QYqKyso",
"EFgoDmxP_5V",
"IZxG0w1GmI",
"5Hbmf_yAi16",
"ev7FuUmLZj8",
"oS68VEzJwOI",
"PGKJAwvCBn",
"JWvuZCMVgyE",
"Kzm-q0M2i59",
"iCM76vG6yV"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040393542,
1606247158361,
1606247025774,
1606246873836,
1606246773913,
1606246664283,
1606246212940,
1604512288142,
1604339681608,
1603914250808,
1603904176557,
1603796132311
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3481/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3481/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3481/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3481/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3481/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3481/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3481/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3481/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3481/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3481/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3481/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper proposes the generalization performance of distillation from random networks as a metric of diversity, named RND. Intuitively, the more diverse the generated datasets, the more difficult it should be for a model to learn a random computation. The reviewers agree that the metric has a novel perspective. Unfortunately, the paper is not sufficiently developed to be accepted at this point. It is currently missing a number of experiments that would demonstrate that this metric is indeed a measure of diversity:\\n\\n1.) RND shows sensitivity to the truncation trick in GANs (for images), and limiting the size of vocabulary in text, but does not show sensitivity to any other changes in diversity (such as human judgment of diversity)\\n2.) It does not compare to previous metrics of diversity, of which there are many\\n3.) How sensitive is RND to architecture choice.\\n4.) It is non-obvious to what extent the metric is sensitive to image/text quality\\n\\nStrong metrics should demonstrate lack of \\\"failure modes\\\", as the utility of a metric is its inability to be gamed. Currently, the paper does not demonstrate this property, though I imagine that more work will help clear up the strengths and weaknesses of the metric. As a result, I can only recommend rejection.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"1) Regarding references to related work: we focus on FID because of its popularity. We also do not include comparisons to a number of other proposed diversity metrics precisely because a selling point for our metric is it can be calculated in settings where other metrics, like FID, cannot. However, for our truncation experiments, we agree that it would be helpful (for validation) to have comparisons to other established diversity metrics, and thus we have included measurement numbers and citations for intra-class FID, and improved recall (see general comments). Finally, we would like to note that our criticisms of FID apply to the metrics you cite as they all essentially aim to estimate likelihood of real data under the generated distribution, and thus require access to ground truth data, and are maximally diverse when a generative model simply memorizes the training data. \\n\\n2) Regarding the computation time of RND: One can speed up RND by decreasing the number of runs, which will in turn increase the variance of the score. We would also like to note that calculating intra-class FID on a large dataset like ImageNet is also quite time consuming, and is often estimated with smaller sample sizes. \\n\\n3) Regarding the convincingness of our experiments: we agree diversity is independent of perceptual quality. However, we tease apart the effects of diversity on the RND score in the truncation experiments, where RND matches the known human evaluation of increased diversity, as well as in the noise experiments in Appendix A4.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"1) As to the original contribution of the paper: while RND exists as an RL bonus, we adapt this into a metric by motivating and evaluating the normalized generalization gap between seen and unseen data - a measurement and distinction that are both novel contributions. Furthermore, we make several practical contributions as we are the first to propose and experiment using a stable metric for diversity on few-shot image generation, as well as natural data.\\n\\n2) Regarding comparison to other metrics: we do not include baseline comparisons in many of our experiments exactly because a selling point for our metric is it can be calculated in settings where other metrics, like FID, cannot. However, for our truncation experiments, we agree that it would be helpful (for validation) to have comparisons to other established diversity metrics, and thus we have included measurement numbers for intra-class FID, and recall (see general comments). \\n\\n3) Regarding The claim that RND captures semantic diversity: we would like to clarify that the images are normalized to have mean 0, std 1 in each channel. For our noise experiments, the noise generated has these same statistics for a controlled comparison.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"1) Concerning the definition of diversity - we agree that memorizing a very diverse dataset produces a diverse output. Our claim was simply that memorizing a dataset doesn\\u2019t automatically yield diversity. Diversity should not be upper bounded by memorizing a particular dataset, as is the case when measuring diversity with intra-class FID. While CIFAR-10 is diverse, and thus a GAN which memorizes CIFAR-10 produces diverse data, this dataset is not maximally diverse, and we want a measure which recognizes when a GAN produces even more diverse data.\\n\\n2) We do include an analysis of architecture choice on the RND score in Appendix A4. We find that comparisons of diversity under our metric are stable across such changes. Furthermore, we would like to point out that other well accepted diversity metrics, like FID, also require architecture choices that may affect the measurement.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"1) Concerning the \\u201carchitectural prior\\u201d and training procedure, while you are correct that there are obvious failure cases (i.e. when the target network identically maps to 0), our experiments in Appendix A4 suggest that diversity comparisons made using the RND score are stable across architecture and training hyperparameter choices. Moreover, common initializations empirically seem to avoid potential degeneracies of the target network. Finally, we would like to note that other measures, such as FID and recall, also require architectural choices.\\n\\n2) Thank you for pointing out the lack of clarity in 3.1. Upon your suggestion, we have reworded this sentence accordingly.\"}",
"{\"title\": \"Response to Reviewer 5\", \"comment\": \"1) Simply put, the aim of RND is to measure the diversity of data. While there does exist theoretical work examining the utility of random features, we instead focus on empirical validation of our method. As for situations in which this metric will differ from other metrics, we would like to stress that the benefit of our metric is that it is more flexible than existing metrics. We do not include comparison/baseline numbers in many of our experiments because other diversity metrics often require access to ground-truth distributions in order to quantify diversity (see more below). \\n\\n2) We focus on FID due to its popularity. As for baseline numbers, we did not include them originally because a selling point for our metric is it can be calculated in settings where other metrics, like FID, cannot. However, for our truncation experiments, we agree that it would be helpful (for validation) to have comparisons to other established diversity metrics, and thus we have included measurement numbers for intra-class FID, and recall (see general comments). Nonetheless, our criticisms of FID apply to all of the metrics you mention, as they all require a ground truth distribution, and essentially seek to measure the likelihood of real data under the generated distribution. Our claim is not that these methods fail in the most obvious settings, but rather they are inflexible. In many of the experiments we run, we do not include a comparison to these existing measurements because they simply cannot be run on settings like few-shot image generation and the diversity of a natural image dataset. \\n\\n3) We do wish to match human assessments of diversity with RND, although \\\"interesting diversity\\\" remains somewhat nebulous, which is why we focus on desiderata like robustness to small amounts of noise. 
As for comparing our metric to human perception, to our knowledge, there do not exist substantial datasets on which human assessment of diversity has been thoroughly measured. We do however match human assessments of diversity in the ImageNet truncation experiments, where RND matches the perceived increase in diversity that comes with a larger truncation parameter.\"}",
"{\"title\": \"General Comments\", \"comment\": \"We thank the reviewers for their feedback. In response to a common concern raised about baseline comparisons to other metrics, we have included two new tables in the submission. These can be found in section 4.1. We would like to stress though that we do not include comparison numbers for many of the experiments we run because many existing metrics cannot be used to measure diversity in few-shot settings, or the diversity of natural data. We will also address each reviewer\\u2019s specific points below.\"}",
"{\"title\": \"Interesting diversity measure idea, insufficient comparisons to other approaches\", \"review\": \"This paper proposes that generalization performance of distillations of random networks can be used as a good metric for the diversity of a data set: as a data set gets more diverse, it should be harder to learn to mimic a random computation on that data set. After defining the metric, the paper compares it to FID in its ability to distinguish truncated GAN output, and applies it to compare different generative architectures and training settings, to compare different data sets such as imagenet classes, and to measure diversity of natural language model outputs.\\n\\nThe strength of the paper in its interesting viewpoint, that diversity can be viewed as the difficulty of a random learning task. The framework and concept is promising. It is good to see that, unlike FID, it detects the loss of diversity as a generator is truncated, without mixing the measurement with precision. And the comparisons of different models and data sets is interesting.\\n\\nHowever, in proposing that RND should be used as a diversity metric, the paper does not sufficiently compare the proposed method to previously proposed alternatives. The paper should establish that the metric is meaningful and useful. Three are three main issues.\\n\\n1. What is being measured by RND? Beyond just the operational definition of how the metric is collected? In what situations would measuring this quantity be expected to differ from other metrics, and what strength weaknesses do the proposed metric have compared to other methods? It seems possible that the idea of diversity-through-learning complexity could have an interesting theoretical definition, but the paper does not attempt a theoretical characterization of exactly what quantity would ideally be estimated by the RND procedure.\\n\\n2. How does it compare to previously proposed metrics? 
A comparison to FID is done, but there are many other approaches for measuring diversity. E.g., see [Borji 2018] for a survey of a large number of alternative metrics for diversity, many of which are designed to measure recall of a generative model compared to a known diverse ground truth. Or see [Sajjadi 2018] or even the Parzen windows of [Goodfellow 2014]. Since RND measures diversity without the need for a ground truth, it could be argued that RND allows new measurements. But to establish that the metric is sound, it should first be compared to a variety of recall metrics where a ground truth is known. Comparisons should be done with different types of recall measures, and also different types of data distributions, including toy examples where differences in diversity are easy to understand.\\n\\n3. Is it a goal for RND to match human assessments of diversity? It is argued that some complexity (such as noise) is uninteresting and should not be included in a diversity metric - but this seems to imply that the goal is to measure just the diversity that would be interesting or informative to a human. If this is the goal, then the performance of RND should be compared to human assessments of diversity.\\n\\nThe idea and the topic of the paper are interesting and strong, but need further development to argue that the proposed measure is meaningful and useful, especially compared to other approaches.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review and Questions\", \"review\": \"RND as a Diversity Metric\\nThis paper proposes a new modality-independent diversity measure for generative models and examines this across image and natural language generation. The idea repurposes an exploration technique from reinforcement learning: random network distillation. The method produces a diversity score of data by splitting it into train and validation partitions. Then, a predictor network is trained via a mean-square error (MSE) to predict the resulting features of passing the train data through a fixed, randomly initialized target network. The diversity measure is then computed as the normalized MSE difference between the train and validation partitions. The intuition of the method is that if the train partition is diverse, then we would expect the predictor network to generalize well in predicting validation target features. If it\\u2019s not diverse, we would expect a large gap.\\n\\nI recommend acceptance. This appears to be a useful advance as a diversity measure that works across different modalities. I\\u2019m not aware of prior work doing this. The largest weakness, IMO, is that the work doesn\\u2019t do enough study into the importance and nature of the random target network. I imagine this is a critical decision (e.g. don\\u2019t use a MLP for a vision target network) and, if the authors want this to be widely adopted by different communities, should provide further guidance on this.\", \"notes\": [\"This has obvious failure modes. For instance, if the target network was a 0-network (all inputs mapped uniformly to a 0-vector), this trivially fails as a diversity measure. This paper should address more details about the requisite nature of the target network. This is the biggest weakness of this paper and I would upgrade my score with a more thorough scientific investigation here. \\u201cThe exact architecture and training procedure depends on the setting. 
For example, we use a transformer architecture to evaluate text, and we use a ResNet architecture to evaluate images (Vaswani et al., 2017; He et al., 2016).\\u201d\", \"I enjoyed Section 2.1 \\u201cWhat do we want from a diversity metric?\\u201d. Capturing the notion of diversity, distinct from information-theoretic measures, is an important property.\"], \"questions_to_authors\": [\"What is the importance of the random network architecture? How does the \\u201carchitectural prior\\u201d impact the efficacy of your approach as a data diversity measure?\", \"Could you improve the clarity of Section 3.1? It reads as, \\u201cThe bonus is tied to how \\u201cnovel\\u201d an agent\\u2019s environment is - as measured by the distillation loss between a fixed, random target network, and a predictor network trained to mimic the features of the target network.\\u201d I found myself re-consulting the original paper to make sure I knew what was going on. I found Section 3.2 to be much clearer.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Not confident on if this paper proposes a good definition on diversity.\", \"review\": \"I understand the authors goal on developing a diversity metric and evaluate models diversity. However, the concept of diversity is related to multiple factors and I don't agree to define diversity independent of memorization: a good memorization of diverse data also contributes to the diversity of models and sometimes good memorization suggests high capacity of model thus may lead to high diversity.\\nIn the approach proposed, the memorization concept is implicitly wrapped into the size of the predictor model. But there is no analysis on the effect of the predictor model sizes on the diversity scores.\\nFor the experiment section, the evaluations are rather non-systematic, only a few categories' RND scores are shown. A table of overall performance will be good.\\nMeasuring diversity of models is an important task, while this paper provides some interesting discussion on defining the diversity and proposed method to measure it. but the definition needs a bit refinement and the the author failed to prove that the propose method is a systematic metric on the diversity of models (only showed specific categories is not enough).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An intuitive method for quantifying diversity, but the paper is missing baselines.\", \"review\": \"This paper applies random network distillation (RND) as a method for quantifying how diverse samples from a generative model are. Samples from the generative model (or any dataset) are used to train a neural network to mimic a randomly initialized network. Intuitively, this is a more difficult task on a more diverse dataset, and so the distillation loss can be interpreted as a measure of diversity. The authors argue that this approach has advantages over other diversity metrics because it can capture semantic diversity and does not require a second reference dataset.\", \"strengths_of_paper\": [\"This article is well-written. The motivation and approach are very clear.\", \"The technique is demonstrated in several different domains, including image generation, text generation, and one-shot GANs.\", \"The approach is intuitive and agrees with qualitative notions of diversity across each domain it was tested in.\"], \"weaknesses_of_paper\": [\"The original contribution is minimal as RND distillations loss is a known technique for quantifying exploration. The main originality comes from identifying it as a way to also quantify diversity in generative dataset.\", \"The distillation loss metric is not compared to other diversity metrics. This would help demonstrate that the RND score is better aligned with diversity than other standard metrics.\", \"The claim that the RND score captures semantic diversity is not well supported. This deserves some scrutiny as the RND is a random feature detector, so it is not clear why it will generally favor semantic diversity. There is an experiment in the appendix to show that the RND score was greater for natural images than random ones, but it is unclear whether other statistics of the random noise were controlled to make this a fair comparison. 
This should be expanded to determine that it generalizes.\", \"Overall, while the paper has some merits, it needs to compare its metrics to other available ones to better make its argument.\"], \"comment\": \"The NLP model needs to be initialized with real text. It would be interesting to dive deeper into how context affects the diversity of the generated text.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting read, but not ready for publication\", \"review\": \"In this paper, the authors introduce a new quantitative diversity measure advocating its usage for generative models evaluation. In a nutshell, to measure the diversity of a particular set, the authors split it into disjoint train/val subsets and learn a DNN to predict the outputs of another randomly initialized DNN on the train set. Then the generalization gap of the trained DNN is computed on the unseen val subset, and the normalized value of this gap (averaged over several splits/initializations) is considered as a diversity measure.\", \"pros\": \"(1) The authors tackle an important problem since the established measures like FID are known to sacrifice diversity in favor of perceptual quality.\\n\\n(2) The proposed measure is novel, the usage of random networks in a new context sounds interesting.\", \"cons\": \"(1) The authors do not relate their measure to the very relevant line of existing works on measuring diversity via Precision/Recall:\\n\\nAssessing Generative Models via Precision and Recall. NeurIPS 2018\\n\\nImproved Precision and Recall Metric for Assessing Generative Models. NeurIPS 2019\\n\\nReliable Fidelity and Diversity Metrics for Generative Models, ICML 2020\\n\\nWithout explicit highlighting of RND's advantages over the Recall/Coverage measures, I cannot recommend to accept the paper.\\n\\n(2) The computation of RND requires several DNN trainings, which is time-consuming. This makes RND inconvenient for broad usage, and almost impossible to use in day-to-day research, e.g., for monitoring the training progress.\\n\\n(3) I am not quite convinced by the experiments, which support the RND applicability. For me, the most sensible experiment is in section 4.1, which shows that aggressive truncations decrease RND. 
Sections 4.2/4.5 show that newer GAN models typically achieve higher RND compared to older ones, but I cannot consider this as strong evidence, since we do not know if the advantage of newer models comes from diversity rather than perceptual quality.\\n\\nOverall, my current recommendation is (4), mostly because of missing a crucial part of related work and unconvincing experiments.\\n\\n::::::Post-Rebuttal update::::::\\n\\nAfter reading the new revision, I decided to keep my initial score. I do not consider the need of groundtruth real data for metric computation as a strong disadvantage. The authors report some numbers on Recall in Table 1 but it only shows that Recall is consistent with RND, being much cheaper to compute. Therefore, I do not see any reason to prefer RND over established diversity metrics.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
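For readers skimming this thread: the RND diversity score debated in the reviews above (distill a fixed random network on a train split, then read off the normalized train/validation MSE gap) can be sketched in a few lines. Below is a hypothetical minimal NumPy illustration with a linear predictor and a tiny frozen random MLP target; the function name and all hyperparameters are invented here and are not the authors' implementation.

```python
import numpy as np

def rnd_diversity(data, hidden=64, out_dim=32, epochs=200, lr=1e-2, seed=0):
    """Toy RND-style score: normalized train/validation gap of a predictor
    distilling a fixed random network. A smaller gap means better
    generalization, which the reviews above associate with a more diverse
    train partition."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    perm = rng.permutation(n)
    train, val = data[perm[: n // 2]], data[perm[n // 2:]]

    # Fixed, randomly initialized "target" network (one hidden layer, frozen).
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (d, hidden))
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, out_dim))
    target = lambda x: np.tanh(x @ W1) @ W2

    # Linear "predictor" fitted to the target features by gradient descent.
    P = np.zeros((d, out_dim))
    y_tr = target(train)
    for _ in range(epochs):
        P -= lr * 2.0 * train.T @ (train @ P - y_tr) / len(train)

    mse = lambda x: np.mean((x @ P - target(x)) ** 2)
    tr, va = mse(train), mse(val)
    return (va - tr) / (va + 1e-12)  # normalized generalization gap
```

A real implementation would of course use a trained deep predictor and modality-appropriate target architectures (a ResNet for images, a transformer for text), as the responses above report the paper does.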
qYZD-AO1Vn | Differentiable Trust Region Layers for Deep Reinforcement Learning | [
"Fabian Otto",
"Philipp Becker",
"Vien Anh Ngo",
"Hanna Carolin Maria Ziesche",
"Gerhard Neumann"
] | Trust region methods are a popular tool in reinforcement learning as they yield robust policy updates in continuous and discrete action spaces. However, enforcing such trust regions in deep reinforcement learning is difficult. Hence, many approaches, such as Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO), are based on approximations. Due to those approximations, they violate the constraints or fail to find the optimal solution within the trust region. Moreover, they are difficult to implement, often lack sufficient exploration, and have been shown to depend on seemingly unrelated implementation choices. In this work, we propose differentiable neural network layers to enforce trust regions for deep Gaussian policies via closed-form projections. Unlike existing methods, those layers formalize trust regions for each state individually and can complement existing reinforcement learning algorithms. We derive trust region projections based on the Kullback-Leibler divergence, the Wasserstein L2 distance, and the Frobenius norm for Gaussian distributions. We empirically demonstrate that those projection layers achieve similar or better results than existing methods while being almost agnostic to specific implementation choices. The code is available at https://git.io/Jthb0.
| [
"reinforcement learning",
"trust region",
"policy gradient",
"projection",
"Wasserstein distance",
"Kullback-Leibler divergence",
"Frobenius norm"
] | Accept (Poster) | https://openreview.net/pdf?id=qYZD-AO1Vn | https://openreview.net/forum?id=qYZD-AO1Vn | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"EsM4dbXVtaJ",
"jHHIW98v91c",
"JufL6_KeIG",
"rsL4gMRmLpM",
"WkW-_s2HOY6",
"jRTDZmk40e2",
"7cISTCnA_nN",
"V3CwMa0VC0",
"ur5441RNZCp",
"suxwxgbX1E2",
"mdNxXncSovO",
"p0pC5D-Txk",
"Tpb9CKMCqsA"
],
"note_type": [
"decision",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040440378,
1607109613911,
1606246101886,
1606003772030,
1605896801685,
1605896753313,
1605896601996,
1605896555859,
1605896369558,
1605895975357,
1604276437291,
1603942829276,
1603882848527
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3480/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3480/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3480/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3480/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3480/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3480/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3480/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3480/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3480/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3480/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3480/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3480/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a differentiable trust region based on closed-form projects for deep reinforcement learning. The update is derived for three types of trust regions: KL divergence, Wasserstein L2 distance, and Frobenius norm, applied to PPO and PAPI, and shown to perform comparably to the original algorithms.\\n\\nWhile empirically the proposed solutions does not bring clear benefits in terms of performance, as correctly acknowledged by the authors, it is rigorously derived and carefully described, bringing valuable insights and new tools to the deep RL toolbox. The authors improved the initial submission substantially based on the reviews during the discussion period, and the reviewers generally agree that the work is of sufficient quality that merits publication. To improve the paper and its impact, I would recommend applying the method to also off-policy algorithms for completeness. Overall, I recommend accepting this submission.\"}",
"{\"title\": \"Belated Review (Not Considered as an Official Review in the Final Decision)\", \"review\": \"I\\u2019m terribly sorry but I noticed that somehow my review of this paper was not successfully submitted as I checked my submission tasks. After double checking with the area chair, we decide to add my review here. Notice that this is **only for the authors' reference and to provide some additional feedback for potential improvement of the paper in the future**, but it is **not considered as an official review in the final decision process**. Please find both the original review and the comments on the revised paper below.\\n\\n### [Original Review]\\n\\nThis paper considers trust region methods in deep reinforcement learning (DRL) and proposes differentiable trust region layers, a type of differentiable neural network layers to enforce state-wise trust regions exactly for deep Gaussian policies via closed-form projections. The proposed approach is flexible and general, and can in particular handle different trust regions (in KL, W2 and Frobenius norms, for example) and can be applied to existing RL algorithms as a complementary component. Empirical results show that the proposed trust region layers (together with entropy projection to encourage exploration) help PPO achieve similar or better results, and are less dependent on implementation/code-level optimization.\\n\\nIn general, the paper is relatively well-written and discusses about a novel and clean approach for solving the problem of enforcing trust region constraints in trust region DRL algorithms. However, the following issues should be noted and addressed:\\n1. On page 2, the authors mention that \\u201cAdditionally, the projection is not directly part of the policy optimization but applied afterwards, which can result in suboptimal policies\\u201d. 
But since the subproblems of TRPO/PPO are just (first-order) approximations to the original RL problem, the meaning of \\u201csuboptimal\\u201d here is unclear. \\n2. On a related point, I\\u2019m wondering what would happen if the authors instead use the projection methods in this paper not as a layer but just as a post-processing step after each standard TRPO/PPO update. The authors should compare this approach with the proposed one (Section 4.4), as this is at least a natural and closely related benchmark (and may even perform better in practice, which is currently unclear without the numerical comparisons). Also, the authors mention that \\u201cfor successive policy updates the projections become rather impractical\\u201d. However, it is not clear to me why it is impractical to use the projection as a post-processing step as mentioned above. \\n3. Again, on a related point, in Section 4.4, it is unclear which policy is eventually adopted in execution. Is it the $\\\\pi_{\\\\theta}$ (before projection) or the projected policy $\\\\tilde{\\\\pi}$?\\n4. Why is it that important to enforce the trust regions exactly? It seems that the major reason provided in this work for focusing on this problem is that exact trust region constraints will lead to some performance improvement with less code-level optimization. However, in general, the empirical performance improvement shown in this paper is not very significant. In fact, for Table 1, I don\\u2019t think the two criteria (\\u201cfirst\\u201d and \\u201clast\\u201d) are informative enough to characterize the overall performance, as it seems that the curves are crossing each other very frequently (in Figures 2 and 4), and so it would make a lot of difference to consider the last 20 epochs, 10 epochs, or just 5 epochs and so on. So it might be better to directly look at the curves. However, from Figures 2 and 4, it seems that with only the trust region layers, the performance improvement is not very obvious. 
It is only with the additional entropy projection that the performance becomes obviously better (Figure 2). Hence I think the authors should also include comparisons with the standard PPO/TRPO + entropy projection. Otherwise, it is not clear whether the entropy projection is central or the trust region layers proposed in this paper are central. \\n5. On a related point, can the authors provide any reasoning about why it is important to enforce the trust region constraints exactly from a theoretical viewpoint?\\n6. Why is it important to avoid code-level optimization? If I understand correctly, code-level optimization are just some tricks commonly adopted in TRPO/PPO methods, as pointed out in (Engstrom et al., 2020). Then why is it a big issue to need code-level optimization?\", \"there_are_also_some_slightly_more_minor_issues\": \"1. In the abstract, the authors mention that existing trust region DRL methods \\u201clack sufficient exploration\\u201d. However, as pointed out later in the paper, existing trust region DRL methods like PAPI have proposed to use entropy projection to encourage exploration, and so this claim is not very accurate. \\n2. Again in the abstract, the authors claim that the proposed differentiable trust region layers can complement existing RL algorithms. However, it seems that the authors only applied these layers to the PPO algorithm. Can the proposed layers also be applied to other RL algorithms (beyond PPO/TRPO)? \\n3. On page 3, in the definition of the Gaussian policies, the authors may want to make it clearer that $\\\\mu$ and $\\\\Sigma$ are parametrized by $\\\\theta$ (if it\\u2019s the case). Otherwise, it may appear that $\\\\theta$ is simply the concatenation of $\\\\mu$ and $\\\\Sigma$, which would rule out the deep neural network parametrization. \\n4. On page 4, at the end of Section 3, it would be better to explain why only the metric for $\\\\mu$ is scaled by $\\\\Sigma_2^{-1}$, while the metric for $\\\\Sigma$ is not. 
It may also be helpful to consider the alternative Frobenius norm with the second term replaced by ${\\\\rm tr}(\\\\Sigma_2-\\\\Sigma_1)^T\\\\Sigma_2^{-1}(\\\\Sigma_2-\\\\Sigma_1)$ and numerically test and compare the performance. \\n5. What is the \\u201centropy projection on its own\\u201d approach? Is it just adding entropy projection on top of (1) without trust region constraints?\\n6. There seem to be some inconsistencies between the tables and the plots. In particular, for Humanoid-v2, Table 1 shows that KL performs the best in terms of the \\u201clast\\u201d criteria, but from the center plot of Figure 2, KL seems to be one of the worst in the last epochs. The authors should double check to make sure that there are no such kind of inconsistencies.\\n\\n### [Comments on the Revised Paper, Rating and Confidence]\\n\\nThe revised version now contains a much clearer description of how the layers are integrated into the algorithm, fixes several typos and reorganizes (and enriches some details of) the numerical experiments following comments of the other reviewers. \\n\\nHowever, most of my major concerns above still remain (which is expected as the authors didn\\u2019t get a chance to see my review, and I sincerely apologize for this). \\n1. For example, although the authors now provide some more detailed explanations about \\u201cimpractical projections\\u201d in Section 4.4, it is still unclear why one cannot use the projection as a post-processing step instead of a layer, and what the authors are trying to convey in the more detailed discussion about impractical projections with a growing storage of previous policy networks in the revised draft here. \\n2. Also, the authors may want to clarify some new terminologies and notation introduced in the revision. For example, are \\u201ccontextual policies\\u201d just policies with state-dependent covariances? 
And what is the index $t$ in the Adam updates in Algorithm 2, and should $a$ and $s$ also be $a_t$ and $s_t$ here? \\n3. Another issue I noticed is that compared with the revised draft, the results (in terms of which method is optimal, and whether or not the proposed layers improve over PPO/PAPI) shown in the plots and the tables are not very stable, which indicate that different runs give pretty different results. Such kind of instability may also be relevant to the inconsistency between Table 1 and Figure 2 in the original draft mentioned in the original review above. \\n\\nOverall, I decide to maintain my original rating.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thank you for the update\", \"comment\": \"Thank you again for reviewing our revised work. We appreciate the update and are glad we could address your concerns.\"}",
"{\"title\": \"Response to response\", \"comment\": \"Thank you! I think the authors did a good job of improving the paper and addressing my concerns. I will consider updating the score after a discussion with the other reviewers/AC.\"}",
"{\"title\": \"Author answer (Part 2)\", \"comment\": \"### Differentiable Projection Layer and Successive Policy Updates and RL Algorithm for Optimisation\", \"to_give_some_more_details_of_our_algorithm\": [\"we use a simple policy gradient of the importance weighted advantage (see Eq. 12). We use Adam to perform this optimization. As the output layer of our policy is the trust region layer, the optimized policy has to stay close to the behavior policy and, therefore, the maximization of the loss function is robust and stable. The policy gradient takes the projection into account as we also differentiate through the output layer to obtain the policy gradient. Hence, the algorithm is a standard policy gradient, but the architecture of the policy is extended by the trust region layer to enforce stable optimizations. We clarified that in the paper and also added a pseudo code description to Appendix A with some additional information about the successive policy update. By \\\"successive policy updates\\\" we mean the iterations of the reinforcement learning algorithm, i.e. a single policy update obtains policy $\\\\pi^i$ from $\\\\pi\\\\_{\\\\textrm{old}} = \\\\pi^{i-1}$.\", \"### Approximations in PPO and our Method\", \"One of the key differences to PPO and TRPO is that our projections do not approximate the computation of the trust region itself.\", \"By leveraging the trust region projection as final layer in the policy, we always sample from the ``correct'' Gaussian distribution that is within the trust region. Our approximation only takes place after the final mini-batch update to ensure the new behavior policy for the next epoch is satisfying the trust region without the projection layer. This is enforced by the regression loss introduced in Eq. 12. While this approximation is not necessary from a theoretical point of view, it is a practical requirement. 
Otherwise, we would need to store all previous policies and compute them for one call of the policy $\\\\pi^i$ as the trust region for $\\\\pi^i$ would depend on $\\\\pi^{i-1}$ while the trust region for $\\\\pi^{i-1}$ depends on $\\\\pi^{i-2}$, and so on. PPO or TRPO never compute the optimal solution for the trust region but use approximations to obtain it. In contrast, we compute the optimal solution and use a regression loss such that the output of the network without trust region layer is close to the optimal solution. As mentioned above, we also included a comparison of the policy change for all algorithms (see Fig. 2). While PPO is able to limit the change with its code level optimizations approximately, our projections present a much more consistent change to policy.\", \"### About Additional Feedback\", \"\\\"Mention projections in the introduction\\\": We revised section Introduction (2nd paragraph) to mention the projections here.\", \"We have revised the MDP formulation and the expected return to incorporate an initial state distribution.\", \"Equations 3 and 4: Both the objective and the constraints are functions of the same optimizing variables. We have elaborated the text below them to explicitly define the optimizing variables.\", \"We revised the text in Section 5 to correct typos, reference for the results in Section 4, and the description how we use different seeds, e.g. captions in Fig.4 and 5, etc.\", \"We hope this alleviates the remaining concerns and again thank the reviewer for their time and feedback.\", \"#### References\", \"Abdolmaleki, A., Price, B., Lau, N., Reis, L. P., \\\\& Neumann, G. (2017). Deriving and improving CMA-ES with information geometric trust regions. In Proceedings of the Genetic and Evolutionary Computation Conference (Vol. 8, pp. 657\\u2013664).\", \"Abdolmaleki, A., Springenberg, T., Tassa, Y., Munos, R., Heess, N., \\\\& Riedmiller, M. (2018). Maximum a Posteriori Policy Optimisation. 
In International Conference on Learning Representations.\", \"Engstrom, L., Ilyas, A., Santurkar, S., Tsipras, D., Janoos, F., Rudolph, L., \\\\& Madry, A. (2020). Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO. In International Conference on Learning Representations.\", \"Song, H. F., Abdolmaleki, A., Springenberg, T., Clark, A., Soyer, H., Rae, J. W., Noury, Seb, Ahuja, Arun, Liu, Siqi, Tirumala, Dhruva, Heess, Nicolas, Belov, Dan, Riedmiller, Martin \\\\& Botvinick, M. M. (2020). V-MPO: on-policy maximum-a-posteriori policy optimization for discrete and continuous control. In International Conference on Learning Representations.\"]}",
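To make the optimization described in Part 2 above concrete, here is a hypothetical sketch of the combined objective: the importance-weighted policy-gradient surrogate plus a regression penalty pulling the unprojected network output toward the projected, trust-region-satisfying distribution. The squared-error penalty is a simplifying stand-in for the paper's actual regression loss (their Eq. 12), and all names, shapes, and the coefficient `alpha` are assumptions for illustration, not the authors' code.

```python
import numpy as np

def trust_region_loss(logp_new, logp_old, adv, mu, mu_proj, std, std_proj, alpha=1.0):
    """Sketch of the combined objective described above (assumed form):
    maximize the importance-weighted advantage while regressing the raw
    network output (mu, std) onto the projected output (mu_proj, std_proj),
    so the next behavior policy satisfies the bound without the projection."""
    # Importance-weighted policy-gradient surrogate.
    surrogate = np.mean(np.exp(logp_new - logp_old) * adv)
    # Stand-in regression penalty between unprojected and projected parameters.
    regression = np.mean((mu - mu_proj) ** 2) + np.mean((std - std_proj) ** 2)
    return -(surrogate - alpha * regression)  # negated, so an optimizer minimizes it
```

In an autodiff framework this scalar would be minimized with Adam, with gradients flowing through the closed-form projection that produces `mu_proj` and `std_proj` as the policy's output layer.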
"{\"title\": \"Author answer (Part 1)\", \"comment\": \"Thank you for you valuable and appreciated feedback.\\nWe would like to address your concerns point-by-point in the following.\\n\\n### Agnostic to Code-level Improvements\\n\\nWe complemented our studies with an additional baseline, PPO-M, which supports our claim of higher robustness to code-level optimizations. \\nPPO-M is leveraging the same minimal set of code-level improvements (for details we refer to the appendix) we used for our projections, however, it is trained with the clipped PPO loss. \\nPPO-M, which effectively has the same precondition as our projections, performs worse on all tasks besides on the Hopper-v2 were it achieves comparable performance. \\nAs Engstrom et al., 2020 already demonstrated, those code-level choices are key to the success of PPO, while our projections do not require them to achieve similar or better performance then the standard PPO. \\nConsequently, our projections present a more theoretically and mathematically sound approach to trust region policy optimization.\\nIn order to present a clearer picture of the performance differences, we now provide a tabular comparison for most experiments and provide the training curves in the appendix.\\nIn the light of this, all experiments have now been conducted with 40 seeds each instead of the previously used 10 to also improve the statistical significance.\\n\\n### Benefit of State-wise Trust Regions and Comparison vs. PPO\\n\\nTo demonstrate the effect of state-wise trust regions we further present a new reaching task that leverages the benefit of contextual covariances and imposes a much harder exploration problem.\\nIn this setting we can show that our state-wise trust regions are superior and can make better use of contextual information.\\nIn this environment the need for properly enforced trust regions becomes more apparent compared to the standard Mujoco benchmarks, which can be solve with rather small exploration. 
\\nWe found that PPO cannot properly make use of the contextual covariance parametrization and, further, moves from exploration to exploitation too quickly.\\nTo support our claim even further, we added more analysis regarding the actual change of the policy distribution for each algorithm to the paper. \\nOur findings here also align with our previous claim of robustness to code-level choices.\\nPPO-M is taking increasingly large steps, while standard PPO can limit the change in the policy distribution with its code-level optimizations to some extent.\\nNevertheless, our projections demonstrate a much more consistent change to the policy than all baseline methods. \\n\\nThe approximate trust region in combination with code-level choices may be sufficient for standard Mujoco benchmarks regarding performance, but already provides a poor trust region bound in this setting. \\nWhen more exploration is needed, however, this approximation is not suitable. \\n\\n\\n### Approximate Trust Regions and RL as Inference\\n\\nThe RL as Inference framework and our approach are orthogonal.\\nNo trust-region constraints follow directly from the inference framework, and our projection layers could be added to such approaches, as well as to approaches following from any other formulation.\\nThat said, constraining the E- and M-step in the EM-like algorithms following from the RL as inference framework is common (V-MPO, Song et al., 2020; MPO, Abdolmaleki et al., 2018; Abdolmaleki et al., 2017).\\n\\nRegarding Song et al., 2020 as well as the \\\"original\\\" MPO (Abdolmaleki et al., 2018): both constrain the policy update (M-step) using an expected KL constraint, as opposed to our state-wise constraints. \\nSuch state-wise constraints allow for better control of the policy update.\\nAdditionally, they rely on an alternating optimization w.r.t.
the policy parameters and the Lagrangian multipliers of the trust region constraints to solve the trust region problem.\\nThose trust region problems are solved analytically in our case, which results in a much easier optimization, as it avoids alternating updates. \\n\\nTo summarize, our approach provides a more general and simpler way of enforcing the trust region constraints used in many approaches from the RL as inference setting, and we believe combining our projections with those approaches is a great direction for further research.\"}",
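To make the distinction between expected and state-wise KL constraints concrete, here is a small numeric sketch (illustrative values only, not from the paper): an expected-KL bound can hold on average while the update at a single state far exceeds it, which only a state-wise (maximum) constraint would catch.

```python
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, per state
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0,
        axis=-1,
    )

rng = np.random.default_rng(0)
n_states, dim = 20, 2
mu_old = rng.normal(size=(n_states, dim))
var_old = np.ones((n_states, dim))

# New policy: tiny change on most states, one large outlier update at state 0
mu_new = mu_old + 0.05
mu_new[0] += 2.0
var_new = var_old.copy()

kls = kl_diag_gauss(mu_new, var_new, mu_old, var_old)
eps = 0.5
print("expected KL within bound:", kls.mean() <= eps)  # True
print("max KL within bound:     ", kls.max() <= eps)   # False: state 0 violates it
```

This is why a per-state constraint gives strictly tighter control of the policy update than the expected KL used by MPO/V-MPO-style methods.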
"{\"title\": \"Author answer (Part 3)\", \"comment\": [\"### Remaining Corrections:\", \"We fixed the problem statement in Eq. 1 according to the reviewer's comment.\", \"We clarified the description of the KL as a divergence.\", \"We improved clarity and the explanation of the general projection problem in Eq. 3 and 4, especially with regard to the individual components involved.\", \"We added references to the detailed derivations in the appendix.\", \"We corrected all unsuitable statements, typos, and grammatical errors, and elaborated on confusing sentences according to the reviewer's detailed comments.\", \"We hope this alleviates the remaining concerns and again thank the reviewer for their time and detailed feedback.\", \"#### References\", \"Abdolmaleki, A., Lioutikov, R., Peters, J. R., Lau, N., Reis, L. P., \\\\& Neumann, G. (2015). Model-Based Relative Entropy Stochastic Search. In Neural Information Processing Systems (pp. 3537\\u20133545).\", \"Abdolmaleki, A., Price, B., Lau, N., Reis, L. P., \\\\& Neumann, G. (2017). Deriving and improving CMA-ES with information geometric trust regions. In Proceedings of the Genetic and Evolutionary Computation Conference (Vol. 8, pp. 657\\u2013664).\", \"Akiba, T., Sano, S., Yanase, T., Ohta, T., \\\\& Koyama, M. (2019). Optuna: A Next-generation Hyperparameter Optimization Framework. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2623\\u20132631.\", \"Akrour, R., Pajarinen, J., Neumann, G., \\\\& Peters, J. (2019). Projections for Approximate Policy Iteration Algorithms. In Proceedings of the 36th International Conference on Machine Learning (pp. 181\\u2013190).\", \"Engstrom, L., Ilyas, A., Santurkar, S., Tsipras, D., Janoos, F., Rudolph, L., \\\\& Madry, A. (2020). Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO.
In International Conference on Learning Representations, ICLR.\", \"Fujimoto, S., Van Hoof, H., \\\\& Meger, D. (2018). Addressing Function Approximation Error in Actor-Critic Methods. In 35th International Conference on Machine Learning, ICML (Vol. 4, pp. 2587\\u20132601).\", \"Pajarinen, J., Thai, H. L., Akrour, R., Peters, J., \\\\& Neumann, G. (2019). Compatible Natural Gradient Policy Search. Machine Learning, 108(8\\u20139), 1443\\u20131466.\"]}",
"{\"title\": \"Author answer (Part 2)\", \"comment\": \"### Maximum KL constraint and Asymptotic Convergence\\n\\nWe agree that this point should be stressed more. Therefore, we now mention this important difference of our approach in comparison to previous works already in the introduction and again in the conclusion. \\n\\n### Trust Region Measures\\n\\nWe improved our description of the reverse KL divergence. \\nThe covariance part is indeed given by two components, the difference in scale by the log ratio of the determinants (which relates to the difference in entropy of both distributions) as well as the rotation difference by the trace term. \\nBoth are mentioned explicitly in the revision of the paper. \\n\\nThe metric space used for the Wasserstein distance relates to the sensitivity of the data-generating distribution, because the data is generated by the old distribution, i.e. sampled from a Gaussian distribution whose covariance matrix is given by $\\\\Sigma\\_{\\\\textrm{old}}$. Thus, using $\\\\Sigma\\_{\\\\textrm{old}}$ to measure the distance between the old and the new distributions explicitly takes the generating distribution of the data into account.\\n\\n### Theoretical Considerations of Differentiable Trust-Region Layers\\n\\nWe agree that the dependence of Eq. 3 and 4 on the state might not be obvious; therefore, we keep the state dependence in these equations and only omit it afterwards. This should help the reader to get a better understanding of the dependence of the constraints and parameters on the state. \\n\\nConsidering the formulation of the problem in Eq. 3 and 4: while joint optimization is more general, the additional flexibility in treating $\\\\mu$ and $\\\\Sigma$ separately comes from the fact that we can introduce different constraints for the two parameters, thus allowing e.g. $\\\\Sigma$ to deviate more from its previous value than $\\\\mu$ and vice versa.
This yields better results and is also common practice in black-box optimization (Abdolmaleki et al., 2017).\\n\\nWe now go into more detail about the commutativity assumption for the general W2 projection, as well as the corresponding requirement that the matrices have to be sufficiently close together. First, the simplified expression for the solution only holds for the case of commuting covariance matrices. For the special case of diagonal covariances, the commutativity always holds. In the general case, however, the simplified expression becomes an approximation, which is justified only when the covariance matrices are sufficiently similar, such that the error made by assuming commutativity is small. This assumption, i.e. the similarity of the two covariance matrices, is required by the bound on the covariance matrix in Eq. 8.\\n\\nLastly, the square root of the covariance matrix appears naturally in the expression for the solution of the projected covariance matrix (Eq. 9). Thus, parametrizing the algorithm in terms of the square root of the covariance matrix instead of its Cholesky factors leads to simpler computations and increased numerical stability. We extended the corresponding footnote, please have a look there. \\n\\n### Entropy Control\\n\\nFor the Gaussian policies considered in this work, the entropy directly relates to the \\\"size\\\" of the covariance, which again relates to the amount of exploration in the on-policy setting of our work. Previous works (Abdolmaleki et al., 2015, Pajarinen et al., 2019, Akrour et al., 2019) have demonstrated that controlling the entropy, and thus the amount of exploration, can yield improved performance, especially when combined with trust regions. We decided to include it in our work as we saw similar effects for our approach, and it can be easily incorporated in the trust region layers.
We, however, tried to make this fact more apparent in our paper.\\n\\n### Successive Policy Updates\\n\\nWe added pseudocode to Appendix A with some additional information about the successive policy update and, moreover, improved the corresponding Section 4.4.\\n\\nTo give some more details here: the successive policy update is effectively the update of the behavior/old policy $\\\\pi\\_{\\\\text{old}}$ that is used for generating trajectories.\\nWhen using other on-policy methods, such as PPO, we naturally obtain this policy by choosing the most recent set of parameters for each epoch $i$, i.e. $\\\\pi\\_{\\\\text{old}}^{i} = \\\\pi^{i-1}$.\\nAt each epoch $i$, however, the policy $\\\\pi^i$ predicted by the network does not respect the constraints before the projection layer, thus it relies on calling this layer. Yet, the policy of the projection layer $\\\\tilde{\\\\pi}$ depends on the parameters of $\\\\pi^i$ and the old policy network $\\\\pi^i\\_\\\\textrm{old} = \\\\tilde{\\\\pi}^{i-1}$. This would result in an ever-growing stack of policy networks, which would become increasingly costly to evaluate. In other words, $\\\\tilde{\\\\pi}^i$ is computed using all stored networks $\\\\pi^i,\\\\pi^{i-1}, \\\\ldots,\\\\pi^0$. As a consequence, we need to encode the information of the projection layer into the parameters of $\\\\pi^i$, which is done by the regression penalty in Eq. 12.\"}",
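The per-state projection idea being discussed can be sketched for the mean part. The snippet below is a simplified stand-in (projection onto a Mahalanobis ball in the old policy's metric, with radial scaling), not necessarily the paper's exact KL-projection formula; the function name and interface are illustrative.

```python
import numpy as np

def project_mean(mu, mu_old, prec_old, eps_mu):
    """Project a predicted mean back into a trust region around mu_old.

    Simplified sketch: the region is the Mahalanobis ball
    (m - mu_old)^T prec_old (m - mu_old) <= eps_mu, and we project in the
    same metric, which reduces to radially scaling the difference vector.
    """
    diff = mu - mu_old
    d = diff @ prec_old @ diff
    if d <= eps_mu:
        return mu  # already inside the trust region: the layer is the identity
    return mu_old + np.sqrt(eps_mu / d) * diff

mu_old = np.zeros(2)
prec_old = np.eye(2)            # inverse of the old covariance
mu_pred = np.array([3.0, 4.0])  # network output violating the bound
mu_proj = project_mean(mu_pred, mu_old, prec_old, eps_mu=1.0)
print(mu_proj)  # [0.6 0.8], exactly on the trust-region boundary
```

During training, gradients would flow through such a projection, and a regression penalty toward the projected policy (as in Eq. 12 of the paper) lets the network parameters absorb the projection so old policy networks need not be stacked.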
"{\"title\": \"Author answer (Part 1)\", \"comment\": \"Thank you for your valuable feedback and the very detailed remarks. We incorporated the suggestions and hope this improves clarity. Additionally, we now included an algorithmic view of our approach in Appendix A to clarify the actual algorithm used in the experiments. Aside from the minor corrections, we want to address your specific questions in more detail in the following.\\n\\n### Performance and Significance of our Contribution\\n\\nAs there are some concerns from the reviewer regarding the tuning of PPO and the significance of our experimental results, we conducted additional experiments. We now use 40 seeds instead of 10 for all experiments in our paper. The new results show that we are able to perform better on all tasks besides Hopper-v2, where we achieve comparable performance. However, we have to agree that the differences are small and we do not massively outperform PPO on the Mujoco benchmarks. In light of this, we want to politely point the reviewer to the Reviewer Guidelines stating that not clearly performing state-of-the-art is on its own not a reason for rejection (https://iclr.cc/Conferences/2021/ReviewerGuide\\\\\\\\#faq). While we do achieve state-of-the-art results, we also provide mathematically more principled trust region layers that are agnostic with respect to the reinforcement learning algorithm (e.g. they could also be used for actor-critic methods) and do not rely on code-level optimizations. To support the latter claim, we added an additional baseline, PPO-M, that uses the PPO objective but the same minimal implementation choices as we do. We show that in this setting PPO performs clearly worse than our projections. Additionally, we complemented the paper with an analysis of the actual change in the policy distribution. Unlike PPO and PAPI, the policy generated by our projections changes much more consistently and, in particular, PPO-M is taking increasingly large steps.
\\n\\nTo emphasize the benefit of state-wise trust regions, we included one additional experiment for a reaching task that involves a much harder exploration problem than the standard Mujoco tasks. Here, we can clearly outperform PPO with and without code-level improvements. Moreover, in combination with state-dependent covariances our trust regions benefit even more. PPO, on the other hand, cannot make use of them and its performance deteriorates. This example shows that, due to the state-individual trust regions, our approach comes with the promise of scaling to more complex policy structures and exploration problems. Yet, this advantage is not visible for the standard Mujoco benchmarks, as the exploration problem in these examples seems to be too simple to exploit state-dependent covariances. \\n\\nHence, we believe that, as we achieve performance comparable to the state of the art and have beneficial properties, the paper qualifies for acceptance even though we do not massively outperform PPO. \\n\\n### Hyperparameter Tuning of PPO\\n\\nTo also address the tuning of PPO, the PPO hyperparameters for the four basic Mujoco tasks have now also been tuned with Optuna (Akiba et al., 2019).\\nHowever, the best set of parameters was on par with our previously used defaults; as a consequence, we left the parameters unchanged. \\nFor both the Humanoid-v2 as well as our newly added Reacher experiment, we found better performance by using Optuna. \\nWe think our initial parametrization for the other four tasks was already a sufficiently good benchmark of PPO's performance. \\nIn our experiments, PPO performs equally well to the Spinning Up implementation benchmarks (https://spinningup.openai.com/en/latest/spinningup/bench.html) and e.g. the performance in Fujimoto et al., 2018.
\\nFurther, the tuned parameters for the stable-baselines zoo (https://github.com/araffin/rl-baselines-zoo) are equivalent to ours and are even used as a default for the new *stable-baselines3*.\\nEven other popular implementations, such as https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail, leverage the same parameters as default.\\n\\n### Code-level optimizations\\n\\nWhen we speak of code-level optimizations, we always refer to choices that are described by Engstrom et al., 2020, but we agree that we needed to highlight this fact more. \\nRegarding the mathematical description of an exponential or linear decay in the related work section: we refrained from adding this, as we find that the related work part should emphasize the connection to other work and, therefore, avoid introducing mathematical notation if it is not essential for doing so. \\nFor more details, we refer to the algorithmic view of our approach in Appendix A, where we demonstrate how to apply the entropy control.\"}",
"{\"title\": \"Author answer\", \"comment\": \"Thank you for your valuable and appreciated feedback.\\nWe would like to address your concerns point-by-point in the following.\\n\\n### Why KL, Wasserstein, and Frobenius?\\n\\nWe agree that considering different metrics and divergences would be very interesting and instructive.\\nYet, the main problem with using arbitrary metrics and divergences is that the optimization problems resulting from such trust region formulations have, in general, no closed form solution. This prevents any practical realization of those approaches that scales to problems of relevant size. The three projections we picked are special in the sense that efficient solutions exist. As shown, for the Wasserstein and Frobenius metrics closed form solutions can be derived. For the KL divergence, solving the optimization completely in closed form is impossible, but we can still obtain closed form solutions for the primal, and only the dual needs to be solved numerically. As we have a fixed number of constraints, i.e. $1$, this optimization is much easier than optimizing the primal, whose dimension scales linearly with the action dimension. This dual optimization allowed us to still implement it in an efficient manner that scales to problems of relevant complexity.\\n\\n\\n### Preferred Choice of Projection\\n\\nTo provide some more insight into which scenarios each projection is preferred for, we added some more analysis to the initial results and, further, provided an additional experiment with contextual covariances. Generally speaking, the Frobenius projection is the weakest out of all three projections. It tends to run into numerical problems when covariances are small at the end of the training, especially with contextual covariances. Tighter covariance bounds and higher weights for the regression penalty in the loss can, however, mitigate those effects.
The KL performs well overall, similar to the W2; hence, if contextual covariances are not required, the KL is the best choice for most problems. As a bonus, it has all properties of existing KL-based trust region methods that have monotonic improvement guarantees. Nevertheless, for quick benchmarks with contextual covariances, the W2 is preferred, given that it does not add the computational overhead the KL does. More specifically, the contextual covariance for the KL requires about ten times more compute time than the Frobenius or W2 projection. This, however, is only for tight covariance bounds, and we are currently working to reduce that by improving the initialization of the dual variables. \\n\\n\\n### Entropy Control\\n\\nAlthough the entropy control mechanism discussed is not itself a contribution of this work, previous works have shown that entropy control can improve the performance of RL algorithms, especially when combined with trust regions (Abdolmaleki et al., 2015, Pajarinen et al., 2019, Akrour et al., 2019). We decided to include it in our work as we saw similar effects for our approach, and it can be easily incorporated in the trust region layers. We also believe that improved schemes of controlling the trust regions and entropies are a promising direction for further research. Yet, to address your concerns, we reformulated the corresponding parts to reduce the emphasis on the entropy control, especially in Section 4.\\n\\n### Remaining Corrections:\\n\\n- We made sure to be more precise with the terms metric and divergence throughout the whole work.\\n\\nWe hope this alleviates the remaining concerns and again thank the reviewer for their time and feedback.\\n\\n\\n#### References\\n\\n- Abdolmaleki, A., Lioutikov, R., Peters, J. R., Lau, N., Reis, L. P., \\\\& Neumann, G. (2015). Model-Based Relative Entropy Stochastic Search. In Neural Information Processing Systems (pp. 3537\\u20133545).\\n- Akrour, R., Pajarinen, J., Neumann, G., \\\\& Peters, J. (2019).
Projections for Approximate Policy Iteration Algorithms. In Proceedings of the 36th International Conference on Machine Learning (pp. 181\\u2013190). \\n- Pajarinen, J., Thai, H. L., Akrour, R., Peters, J., \\\\& Neumann, G. (2019). Compatible Natural Gradient Policy Search. Machine Learning, 108(8\\u20139), 1443\\u20131466.\"}",
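For reference, the three similarity measures compared in this exchange have simple closed forms for diagonal Gaussians. The sketch below uses the standard textbook formulas (the W2 expression is exact only for commuting, e.g. diagonal, covariances), as an illustration rather than the paper's implementation; the Frobenius variant here simply combines the squared mean gap with the squared Frobenius norm of the covariance gap.

```python
import numpy as np

def kl(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), diagonal covariances
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def w2_sq(mu_q, var_q, mu_p, var_p):
    # Squared 2-Wasserstein distance; exact for commuting (e.g. diagonal) covariances
    return np.sum((mu_q - mu_p) ** 2) + np.sum((np.sqrt(var_q) - np.sqrt(var_p)) ** 2)

def frob_sq(mu_q, var_q, mu_p, var_p):
    # Squared mean gap plus squared Frobenius norm of the covariance gap
    return np.sum((mu_q - mu_p) ** 2) + np.sum((var_q - var_p) ** 2)

mu_old, var_old = np.zeros(2), np.ones(2)
mu_new, var_new = np.array([1.0, 0.0]), np.ones(2)
print(kl(mu_new, var_new, mu_old, var_old))       # 0.5
print(w2_sq(mu_new, var_new, mu_old, var_old))    # 1.0
print(frob_sq(mu_new, var_new, mu_old, var_old))  # 1.0
```

All three decompose into a mean-dependent and a covariance-dependent part, which is what allows the mean and covariance projections to be handled separately.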
"{\"title\": \"Differentiable Trust Region Layers for DRL - why these three?\", \"review\": \"The authors explore the use of the KL, 2-Wasserstein, and Frobenius norm in order to derive trust region projections in DRL. The topic is relevant and novel, especially given the prevalence of TRPO and PPO in RL in recent years.\\n\\nThe paper is well-written and structured, with a nicely written related work section. I find the study very relevant and useful but somewhat incomplete. I would like the paper to better motivate\\n\\n- why these three (KL, Wasserstein, Frobenius) were chosen. By the way, be precise with the use of the word metric (beginning of Sec 4) when referring to the KL. It would be nice to extend the same analysis to the wider families of metrics and divergences that these three are part of.\\n\\n- the rationale to select the right one for a given problem.\\n\\nI also find that the discussion on entropy control, although interesting on its own, somehow distracts from the main message of the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review for Paper3480\", \"review\": [\"### Summary\", \"Trust-region-based policy optimization methods such as TRPO and PPO are difficult to tune and require many approximations. The authors try to solve this issue by introducing the closed-form derivation of trust regions for Gaussian policies with three different types of divergence (or distance). Based on the theoretical derivation, a differentiable layer is proposed, where the layer is built upon the \\u201cold\\u201d policy during the trust-region-based policy updates. The differences that come from the use of various divergences (or distances) are given in theoretical and empirical ways.\", \"### Quality\", \"The proposed idea seems interesting and the theoretical derivations seem mostly sound, but the empirical performance doesn\\u2019t support the authors\\u2019 claim, which makes me decrease the score considerably.\", \"### Clarity\", \"The Introduction is written mostly well, although some sentences are too specific to be understandable at first glance.\", \"The Related Work is well-summarized and emphasizes the differences and advantages clearly.\", \"Section 4 (which is about the main ideas) needs to be further clarified and reorganized in some parts.\", \"The use of the entropy projection is quite abrupt and I couldn\\u2019t understand its clear motivation.\", \"Successive policy updates seem to improve the methods in a way that the mean is well-bounded during updates, but their explanation and justification are difficult to understand.\", \"There are more comments in `Detailed Comments`\", \"### Originality\", \"I really enjoyed reading the projections of the mean and covariance (~4.2) and the ideas are somewhat novel.
However, empirical results don\\u2019t support the authors\\u2019 claim in the sense that the differentiable trust-region layer doesn\\u2019t improve the performance significantly.\", \"### Significance\", \"The idea is interesting, but its significance is low due to the empirical performances as well as loosely tuned baseline (PPO).\", \"Additionally, some motivations/derivations/links among equations are unclear.\", \"### Detailed comments\", \"(p.1, Abstract) code-level optimization\", \"I was curious about the definition at the first glance.\", \"(p.1, Introduction) our method comes with the benefit of imposing the constraints on\", \"the level of individual states, allowing for the possibility of context dependent trust regions.\", \"I couldn\\u2019t understand what this means at the first glance.\", \"(p.1, Introduction) Considering the trust region layers require\", \"Considering the trust region layers requires\", \"(p.1, Introduction) the new policy without trust layers has to stay close to the output of the projection layer.\", \"I don\\u2019t think such a specific methodology needs to appear in the introduction.\", \"(p.2, Related Work) However, Engstrom et al. (2020) and Andrychowicz et al. (2020) recently showed that code-level optimizations are essential for achieving state-of-the-art results with PPO\", \"The meaning of code-level optimization needs to be clearer.\", \"(p.3, Related Work) They use either an exponential or linear decay of the entropy during policy optimization to control the exploration process and escape local optima. To leverage those benefits, we embed this entropy control mechanism in our differentiable trust region layers.\", \"I\\u2019d rather add simple maths to describe what the previous works did for clarity.\", \"(p. 3, Preliminaries and Problem Statement) Eq. 
(1)\", \"In the subscript of the expectation, I\\u2019d rather add either $t=1, \\u2026$ or the trajectory distribution.\", \"$A^\\\\pi$ -> $A^{\\\\pi_{\\\\mathrm{old}}}$?\", \"I think using bold letters for all state-action pairs is a bit confusing. I\\u2019d use bold letters only for random variables and plain letters for non-random variables.\", \"(p.3, Preliminaries and Problem Statement) Using a constraint on the maximum KL over the states has been shown to guarantee monotonic improvement of the policy (Schulman et al., 2015a). However, as all current approaches do not use a maximum KL constraint but an expected KL constraint, thus the monotonic improvement guarantee also does not hold exactly.\", \"I think this sentence should be emphasized.\", \"(p.3, Preliminaries and Problem Statement) Gaussian policies ~ as well as\", \"as well as -> and?\", \"(p.3, Preliminaries and Problem Statement) the commonly used reverse KL\", \"The expression `commonly used` here seems weird to me.\", \"(p.3, Preliminaries and Problem Statement) The similarity of the covariance is measured by the difference in entropy of both distributions\", \"This statement seems incorrect since entropy is an averaged value of the negative log probability and the KL is not exactly the difference between two entropies.\", \"(p.3, Preliminaries and Problem Statement) as it is always non-negative\", \"since it satisfies the criteria of being a divergence.\", \"(p.3, Preliminaries and Problem Statement) as the distance measure is then more sensitive to the data-generating distribution.\", \"If I understood correctly, the distance is defined by using the covariance of the old policy distribution (similar to the distance proposed in Dadashi et al., 2020 that only cares about the diagonal covariance matrix), but how is this related to the sensitivity w.r.t.
the data-generating distribution?\", \"(p.4, Differentiable Trust-Region Layers for Gaussian Policies) Additionally, we extend the trust region layers to include an entropy constraint to gain control over the evolution of the policy entropy during optimization\", \"I\\u2019d rather use this sentence where the formula for entropy constraint appears.\", \"(p.4, Differentiable Trust-Region Layers for Gaussian Policies) The trust regions are defined by means of a distance or divergence ~\", \"In my understanding, it\\u2019s not a mean over distance, but just a distance between new and old policies.\", \"(p.4, Differentiable Trust-Region Layers for Gaussian Policies) Note that $\\\\mu$, $\\\\Sigma$ are state-dependent, which we will however neglect for ease of notation.\", \"I\\u2019d rather keep the dependencies on states since it\\u2019s a bit confusing to understand the state dependencies of objectives (3) and (4).\", \"If I understood correctly, (3) and (4) will be optimized for each $s\\\\in\\\\mathcal{S}$, and thus, we can use the solution of projection over all states the old policy can visit.\", \"(p.4, Differentiable Trust-Region Layers for Gaussian Policies) The output of the trust region layer is then considered to be the new policy.\", \"Since (3) and (4) are objectives, I\\u2019d rather state like \\u201cWe desire the output of the trust region layer becomes the parameters of the new policy and formulate the trust region layer as follows.\\u201d\", \"(p.4, Differentiable Trust-Region Layers for Gaussian Policies) As all distances or divergences used in this paper can be decomposed into a mean and a covariance dependent part\", \"This may be since we don\\u2019t update the second distribution of all metrics -- therefore, $\\\\Sigma_2$ is fixed -- where the old policies will be plugged in. 
Such an explanation seems to be needed.\", \"(p.4, Differentiable Trust-Region Layers for Gaussian Policies) as this gives the algorithm more flexibility\", \"I understand the way the trust region will be used, but this statement is weird since joint optimization is much more general and flexible in my understanding.\", \"(p.4, Differentiable Trust-Region Layers for Gaussian Policies) where $d_\\\\mu$ is the mean dependent part and $d_\\\\Sigma$ is the covariance dependent part of the employed similarity measure\", \"The subscripts are confusing since $\\\\mu$ and $\\\\Sigma$ are used as input arguments of (3) and (4). $\\\\mu$ and $\\\\Sigma$ are the output parameters of the Gaussian policy, which seem to be fixed during the optimization of (3) and (4). This should be stated for clarity.\", \"(p.4, Differentiable Trust-Region Layers for Gaussian Policies) All three trust region projections\", \"\\u201cThree\\u201d indicates different distances, so it should be linked with the previous equations on distances.\", \"(p.4, Differentiable Trust-Region Layers for Gaussian Policies) By making use of the method of Lagrangian multipliers,~\", \"Link Appendix A.2. for readers.\", \"(p.5, Wasserstein Projection) To find the projected covariance ~\", \"I\\u2019d rather put this sentence after the sentence \\u201cHowever, in practice we found this approach to be numerically unstable.\\u201d\", \"(p.5, Wasserstein Projection) For the more general case of arbitrary covariance matrices, we would need to ensure the matrices are sufficiently close together, which is effectively ensured by Equation 6.\", \"I don\\u2019t fully understand what the authors intended to say.\", \"(p.5, Wasserstein Projection) Note however, that here we chose the square root of the covariance matrix ~\", \"Is there an advantage of using the square root of the covariance matrix?
Also, it would be helpful if the definition of the square root of the matrix is given.\", \"(p.6, Figure 1) Entropy of the interpolated covariances\", \"Covariance cannot define entropies. We should use \\u201cEntropy of the interpolated distributions\\u201d\", \"(p.6, Entropy Projection)\", \"This is a bit abrupt to me since trust region w.r.t. old policy has been considered until page 5. At the first glance, I couldn\\u2019t understand why entropy projection is needed and how the scaling is related to the exploration. A more intuitive explanation is needed.\", \"(p.6, Analysis of the Projections) A paragraph with the sentence \\u201cIt is instructive to compare the three projections.\\u201d\", \"I\\u2019d rather use equations for a detailed explanation.\", \"(p.7, Successive Policy Updates) The above projections can directly be implemented for training the current policy. However, for successive policy updates the projections become rather impractical. Each projection would rely on calling the projection of the preceding policy.\", \"I couldn\\u2019t understand this part. My best understanding was using stated projections for policy updates is impractical, but how each projection is related to preceding policy is unclear.\", \"(p.7, Successive Policy Updates) The most intuitive way to solve this problem is to use the existing samples for additional regression steps after the policy optimization. Yet, this adds a computational overhead.\", \"I couldn\\u2019t understand this part.\", \"(p.7, Successive Policy Updates) Eq (10)\", \"$\\\\tilde{\\\\pi}$ seems \\u201dDifferentiable trust-region layer\\u201d and needs to be mentioned.\", \"(p.7, Experiments) For our experiments, the PPO hyperparameters are based on the original publication (Schulman et al., 2017). The PAPI projection as well as its conservative PPO version are executed in the setting sent to us by the author. 
For our projections, all parameters are selected with Optuna (Akiba et al., 2019)\", \"PPO is a baseline here but seems naively tuned.\", \"(p.7, Table 1) We trained ten agents on different seeds\", \"It seems ten seeds, not ten agents?\", \"PPO works much better in Hopper and Walker2d, which is different from the statement in the main context (\\u201cThe results show that our trust region layers are able to perform similar or better than PPO and PAPI across all tasks\\u201d)\", \"(p.7, Figure 2) Left\", \"The result doesn\\u2019t seem statistically significant. Means of proposed methods are within the confidence interval of PPO.\", \"### References\", \"Dadashi et al., 2020, \\u201cPrimal Wasserstein Imitation Learning\\u201d\", \"---\", \"### Response to Authors\", \"I'm satisfied with your responses, especially strengthened experimental results and the clarification of methods in the revision. As my concerns were on doubtful empirical results in the submission---although I thought the approach of the submission was sufficiently novel---I updated my score from 4 to 6 accordingly. I don't know if the authors will keep working on this direction, but I think it will be interesting to see the performance of *off-policy RL* with the proposed method.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good and well-written paper, but experiments require more analysis\", \"review\": [\"### Summary\", \"The paper proposes a way to impose trust region restrictions via projections when doing policy optimisation in Reinforcement Learning. The projections have a closed form and enforce a trust region for each state individually. The authors propose three types of projections based on the Frobenius distance, the Wasserstein distance, and the KL divergence. They compare them to the existing methods (PPO, PAPI) and provide some insights about their behaviour.\", \"### Pros\", \"The paper is coherent and clearly written.\", \"The paper has a clear motivation and research question.\", \"The paper has an extensive and detailed \\\"Related Work\\\" section.\", \"I find the analysis of the projections and the result of Theorem 1 extremely interesting and insightful. However, I did not check the proof in the Appendix.\", \"The Appendix has many details useful for further understanding of their approach and reproducing the results.\", \"### Cons\", \"I find some claims not supported by enough evidence (see Questions).\", \"I think the experimental section requires more analysis. It's fine not to beat the existing work with 2x better results, but there should be a thorough discussion of what the results mean (see Questions).\", \"Related work comes before the Background, and sometimes it's quite difficult to understand the details of the prior work and their relation to the paper I am reviewing. \\\"Projections for Trust Regions\\\" subsections could have more details on positioning/comparing the proposed approach to prior work.\", \"### Reasoning behind the score\", \"I enjoyed reading this paper. The story flows coherently and logically. The problem the authors consider is important, and the proposed solution is reasonable theoretically and practically. However, the paper sometimes makes a claim without providing evidence to support it. 
The paper compares itself with the existing strong methods; however, I find the results section lacks analysis regarding this comparison.\", \"### Questions to the authors\", \"Your method can impose per-state trust regions. How can you show that this is beneficial? Why is having this beneficial? Can you provide an ablation for this?\", \"You claim that your method is \\\"more stable and less dependent on code-level optimizations\\\". I don't think you support this claim anywhere in the paper. How can you support that?\", \"You mention projections in the introduction, but explain them only in the Related Work section. Can you somehow introduce them earlier?\", \"In the last paragraph of 'Approximate Trust Regions', you mention RL as Inference, EM and Song et al. Can you explain the pros and cons of their approaches? Why is your approach still needed?\", \"I think there should be a clear and more detailed description of what a 'differentiable projection layer' is.\", \"What is the exact RL algorithm you use for optimisation? Can you provide the pseudocode in the appendix? Can you describe what you mean exactly by \\\"successive policy updates\\\"?\", \"The plots' shades overlap and it's really hard to say which method is better and if that's due to randomness or not. I would like to see more discussion on what the numbers tell us. If your method properly imposes trust regions, but the results are comparable to PPO, does it mean that approximate trust regions are okay? You say that \\\"Standard PPO is using a lot of code-level optimizations which are not used by our approach\\\". How can you interpret your results in the light of this? If your method is comparable to PPO without code-level optimisations, what does this tell us about your method? Are there any other existing benchmarks which show the superiority of your method? 
Can you predict some settings where your method will be significantly better than PPO?\", \"In the introduction, you say that 'Due to the approximations, they [PPO, TRPO] violate the constraints or fail to find the optimal solution within the trust region'. However, you also approximate the trust region in Section 4.4. Why is approximating okay for you, but not okay for PPO/TRPO?\", \"### Additional feedback not affecting the score\", \"You do not include the initial state distribution in the definition of the MDP; without it, the transition function under the expectation in Equation 1 does not make sense ($\\\\mathcal{T}(\\\\cdot | s_{-1}, a_{-1})$ for $t=0$). The same applies to the trajectory distribution.\", \"typo \\\"covarinces\\\" on page 5\", \"It took me a while to parse Equations 3 and 4 before I realised that parameters in the minimisation problem and in the constraint differ. Can you give a reference to such an optimisation problem in the existing literature (e.g. Boyd's book or somewhere else)?\", \"In Section 4, when you describe the projections and Lagrangian multipliers, the results come a bit out of the blue (e.g. 4.1). You have more details in the Appendix, but you do not refer to them from the main text.\", \"\\\"We trained ten agents on different seeds for each method\\\" in Table 1 caption sounds a bit confusing. Should it be 'for each environment'? Otherwise, it sounds as if you used 10 seeds for a method (2 per each of the five environments).\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
xppLmXCbOw1 | Self-supervised Visual Reinforcement Learning with Object-centric Representations | [
"Andrii Zadaianchuk",
"Maximilian Seitzer",
"Georg Martius"
] | Autonomous agents need large repertoires of skills to act reasonably on new tasks that they have not seen before. However, acquiring these skills using only a stream of high-dimensional, unstructured, and unlabeled observations is a tricky challenge for any autonomous agent. Previous methods have used variational autoencoders to encode a scene into a low-dimensional vector that can be used as a goal for an agent to discover new skills. Nevertheless, in compositional/multi-object environments it is difficult to disentangle all the factors of variation into such a fixed-length representation of the whole scene. We propose to use object-centric representations as a modular and structured observation space, which is learned with a compositional generative world model.
We show that the structure in the representations in combination with goal-conditioned attention policies helps the autonomous agent to discover and learn useful skills. These skills can be further combined to address compositional tasks like the manipulation of several different objects. | [
"self-supervision",
"autonomous learning",
"object-centric representations",
"visual reinforcement learning"
] | Accept (Spotlight) | https://openreview.net/pdf?id=xppLmXCbOw1 | https://openreview.net/forum?id=xppLmXCbOw1 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"exEnhLxCUwh",
"NG9YZfmgdh",
"Y-k9S-NiB5J",
"FG1tgmJK3z",
"pyCK24SAygT",
"wU8z9qQ8Si",
"Yxid9OkGhtX",
"-RZY6NOEA9D",
"2KrV0RH7yY",
"AJF2Pptj7Ou",
"k7tx7t28rje",
"PgXrL96qlba",
"LmH31ZH8-OD",
"1KyMSB9iEek",
"BdHSLGJnp3-"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040362417,
1606297564429,
1606297455056,
1605910831410,
1605865502177,
1605780783994,
1605780463734,
1605779983824,
1605779562553,
1605779452094,
1605777484014,
1603902916172,
1603900091828,
1603898297282,
1603638930820
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3479/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3479/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3479/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3479/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3479/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3479/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3479/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3479/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3479/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3479/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3479/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3479/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3479/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3479/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes a self-supervised learning algorithm to compute object-centric representations for efficient RL in the context of robot manipulation tasks.\\n\\nThe key idea is to learn an object-centric representation (using prior work on SCALOR) and use this to intrinsically generate goals for a SAC policy to achieve. The policy is a goal-conditioned attention policy. The evaluation metric is a set of tasks to manipulate objects for a visual rearrangement task. \\n\\n${\\\\bf Pros}: $\\n1. The baselines are reasonable and consist of other unsupervised RL algorithms in recent literature. \\n\\n2. Object-oriented RL is a growing area of interest and this paper proposes a reasonably novel and validated set of ideas in this domain. I believe it will be of significant interest and potentially make an impact on research in robotics and deep reinforcement learning.\\n\\n3. The goal-conditioned attention policy can handle realistic scenarios, namely -- multi-object manipulation tasks\\n\\n4. The attention mechanism also provides a reasonable solution to mitigate combinatorial hardness in multi-object environments\\n\\n${\\\\bf Cons}$: \\n\\n1. Some of the reviewers felt that the experimental results from pixel inputs could have been pushed further. However, since the setup and algorithm are relatively novel, there are already many moving parts and this paper seems like a step in that direction\\n\\n2. Experiments with a larger set of objects would have been interesting to investigate and report.\"}",
"{\"title\": \"Added references\", \"comment\": \"Thank you for pointing out those references. We included them into our related work section.\"}",
"{\"title\": \"Final Revision\", \"comment\": \"Dear reviewers, we uploaded a final revision where we cleaned up some minor details and added a few more references.\\nThank you for engaging with us during the review period!\"}",
"{\"title\": \"Update\", \"comment\": \"Thank you for running the additional experiments, and I understand why the two initially proposed ablations are infeasible. The representation learning visualizations and ablations helped to understand how SMORL is working, and my concerns have been addressed.\\n\\nRelated work is definitely better now, but still quite terse. There is more work that could be covered in self-supervised RL (eg. [1, 2]).\\n[1] Unsupervised control through non-parametric discriminative rewards. Wade-Farley et al. 2018.\\n[2] Active learning of inverse models with intrinsically motivated goal exploration in robots. Baranes et al. 2013.\\n\\nIn light of the revisions I have raised my score.\"}",
"{\"title\": \"Thank you for your detailed response\", \"comment\": \"Thank you for your detailed response. The response resolves most of my questions and concerns, and thus I increase my rating from 4 to 5.\\n\\nHowever, the reviewer considers the scalability of the proposed goal-conditioned attention policy to be the primary contribution of the paper, and this is not empirically shown. Therefore, the Visual Push and Visual Rearrangement experiments with 3-4 objects are essential to make this paper convincing.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"Dear reviewer, thank you for your positive evaluation of our work and for the constructive feedback.\\n\\n\\u201cI would also be interested in a visualization of the latent object representations learned by SMORL.\\u201d\\n\\nWe have implemented different analyses of SCALOR\\u2019s representation, which are presented in Appendix A of the updated paper, in particular concerning your questions:\\n - A clustering of $z^\\\\mathrm{what}$ components reveals a clear separation of the underlying objects, see A.1.\\n - We have added some trajectory traversals in A.3.\\n\\n\\u201cThere is no discussion of the computation cost of SMORL in comparison to the baselines (SAC with HER, RIG, Skew-Fit).\\u201d \\n\\nThanks a lot for pointing this out. We have added this discussion to Section 5.2 of the paper.\\nThe main computational cost is in representation learning and the encoding of images to representations during the data collection for RL. Here, we can provide relative comparisons. While SCALOR training takes longer than VAE training because of SCALOR utilizing recurrence over the sequence of observations, it can be done once and then used throughout RL training. Thus, RL training for RIG and VAE is comparable as they also use their trained encoders for inference only. Skew-Fit training, on the other hand, takes much more time as it requires additional VAE re-training during the RL phase. \\n\\n*\\u201cThe \\u201ccompositional\\u201d aspect is unclear in the Experiments section. How does the \\u201ccompositional generative world model\\u201d translate to productivity, substitivity or other forms of compositional generalization with respect to the objects in the image?\\u201d*\\n\\nOur architecture creates a factorized representation by design. Also, position and appearance are additionally decoupled. The experiments shown in Fig. 3 with 3 and 4 objects provide empirical evidence for its effectiveness. 
The number of different goal configurations grows exponentially with the number of objects. Thus, we address a form of compositionality where the tasks are *composed* of largely independent subtasks (namely moving one object at a time). To clarify our scope, we reworded the passages addressing compositionality. We also show the generalization capabilities of our system on unseen tasks with a different number of objects at test time (see 5.3 of the updated paper).\\n\\n*\\\"Experiments with a larger set of objects would help in highlighting the advantage of SMORL in a general multi-object visual RL setting.\\\"*\\n\\nUnfortunately, we do not have enough computational resources to run all the hyperparameter optimization, etc., during the rebuttal period for more objects. The experiment reported in Fig. 3 with 3 and 4 objects shows that good representations will transfer to good performance using our policy architecture. However, we agree that this is an important future direction.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Dear reviewer, thanks a lot for your review. We are glad that you like our work.\\n\\n*\\u201cHow the \\u201cevaluation on a novel task\\u201d is done wasn\\u2019t as clear (i.e. when we try to implement a given goal or sequence of goals)\\u201d*\\n\\nThe algorithm sequentially attempts to solve all the recognized sub-goals with different $z^{\\\\mathrm{where}}$. We clarified this in the paper; it is also described precisely in Algorithm 3 in the Appendix. We believe that using a more sophisticated planning algorithm would allow for solving more complex tasks during evaluation. We discuss this in the paper now (Conclusions).\\n\\n*\\u201cWhat happens when some of the \\u201cobjects\\u201d aren\\u2019t achievable or controllable?.\\u201d*\\n\\nRight now, the agent would still attempt to \\u201csolve\\u201d these tasks. At test time, the agent has several attempts to solve sub-goals with a limited time-budget, so it would not get stuck on these tasks. A discussion and a future work direction are added to the Conclusion.\\n\\n*\\u201cE.g. you see a slot which represents the robot arm, is this treated differently?\\u201d*\\n\\nThe arm is typically represented by its own slot. It is not treated differently, but just considered as one of the discovered objects that are controllable. This way, one of the discovered tasks is actually reaching, i.e. moving the robot arm itself to a specific location. We have verified that the agent does quickly learn this sub-task as well and can include this in the Appendix if needed. \\n\\n*\\u201cHow good are SCALOR representations in your environment?\\u201d*\\n\\nThis is a really good question. We have implemented different analyses of SCALOR\\u2019s representation, which are presented in Appendix A of the updated paper. 
To summarize: \nA clustering of $z^\\mathrm{what}$ components reveals a clear separation of the underlying objects, see A.1.\nThe representation of SCALOR is highly disentangled according to the Mutual Information Gap (MIG) measure in contrast to VAE representations, see A.2. As you have observed correctly, this requires a matching which we do using the clustering above. \n\n*\\u201cDid you try continuing training SCALOR in the RL phase? It would make the results stronger and less reliant on a good random exploration strategy.\\u201d*\n\nFor this work, we restricted ourselves to training SCALOR on data from a random policy, as online training of SCALOR would take much more computational resources and would potentially be less stable (this intuition is supported by comparing the online Skew-Fit architecture with the passive RIG method). Nevertheless, it is an interesting future work direction (see the second part of the conclusion in the updated paper). \n\n*\\u201cThe explanation about the issue of using tracking as part of the model directly wasn\\u2019t especially clear to me. It might deserve a bit more expansion, especially in the Appendix?\\u201d*\n\nWe provide more detail in Appendix E. The imperfection of the tracking algorithm provided by SCALOR can be exploited by the RL agent. \n\n*\\u201cCould you add more samples of the environment\\u2019s observations?\\u201d*\n\nWe have added some trajectory traversals in A.3.\n\n*\\u201cIt seems like the environment chosen is extremely similar to the standard Gym Fetch environment, did you try using it instead?\\u201d*\n\nThe reason for choosing the multiworld environments is that they were used in the literature on visual RL. In principle, the Gym Fetch environment could be modified to provide visual observations and multiple objects. \n\n*\\u201cIt is not entirely true that MONet/IODINE \\u201cdo not contain disentangled and interpretable features like position and scale\\u201d. 
\\nIt is true that they are not explicitly enforced (like done in SCALOR), but they do arise quite easily purely unsupervised. \\u201d*\\n\\nThanks for spotting this. We have clarified this point in the updated paper.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Dear reviewer, thank you for your positive evaluation of our work and for the constructive feedback.\\n\\n*\\u201cThe paper would be strengthened by direct analysis of this hypothesis: for instance, correlations or measures of disentanglement between ground truth state and learned representation for SCALOR vs. VAE in the environments tested\\u201d*\\n\\nThis is a really good suggestion, which we have now implemented. We have added several parts in the appendix addressing SCALOR\\u2019s representation properties. In particular, we have added a disentanglement analysis in App A.2, where we compute the mutual information gap (MIG) for VAE and SCALOR representations. As the VAE\\u2019s MIG scores were low for 1 and 2 objects, we decided to add only one comparison; however, if needed, other comparisons could also be added. In addition, we have added a mutual information matrix to show a more detailed picture of the disentanglement of SCALOR components. \\n\\n*\\u201cAt test time, potentially many $z^\\\\mathrm{where}$ could be different, so do you cycle between all the objects? \\u201d*\\n\\nYes, we sequentially try to solve all the recognized sub-goals with different $z^\\\\mathrm{where}$. We clarified this in the paper; it is also described precisely in Algorithm 3 in the Appendix. \\nWe discuss this aspect in the conclusion now (see the second part of the conclusion in the updated paper). 
We believe that using a more sophisticated planning algorithm would allow for solving more complex tasks during evaluation.\\n\\n*\\u201cThe paper would benefit from an ablation where you explore the choice of attention architecture; you could imagine that simply learning the object-centric representation and treating it as a flat representation like a VAE could also provide gains\\u201d*\\n\\nThe choice of our policy was motivated by two different properties:\\n - SCALOR representations are unordered, potentially different-sized sets\\n - The policy should be able to concentrate on parts of the representation set\\n\\nDue to the first property, we cannot directly convert SCALOR representations to a fixed-length vector (the number of recognised objects can change, e.g. because of occlusions). We have done an ablation study where we explored 3 different choices of policies compatible with SCALOR representations. The results are presented in Appendix B of the updated paper. First, we checked the importance of goal-conditional heads and an unconditional head in our architecture. We find that without the goal-conditional heads, the SMORL algorithm performs significantly worse (while both are contributing to the final performance), showing the importance of goal-conditioned attention. Next, we substitute the goal-conditioned attention module with the set aggregation architecture DeepSets [1]. We observe that SMORL with DeepSets can also perform competitively on the two-object tasks; however, it is significantly worse on the out-of-distribution (OoD) task with one object. This shows that goal-conditioned attention contributes to the OoD generalization properties of our architecture. 
\n\n*\\u201cThe proposed reward function would be better evaluated with an ablation where you use the original reward from prior work $-||z - z_g||$.\\u201d* \n\nIndeed, such an ablation would be interesting; however, given that representations are unordered, it would require an additional matching of the recognized objects of the current input set $z$ and the goal set $z_g$, so it cannot be performed.\n\n*\\u201cSeeing longer versions of the learning curve that show asymptotic performance would help understand this better\\u201d*\n\nThanks for mentioning this. We have now trained both SMORL and RIG for twice as long and indeed get improved SMORL performance with average distance to the goal = 0.14 (2 objects), see plots in App. C. As the performance curve of SMORL still suggests further improvements with even more training, we will train longer and update the plots accordingly (not feasible in the rebuttal period due to computational restrictions). \n\n*\\u201cThe related work is a bit thin and narrow - it addresses the nearest-neighbor works well but does not address self-supervised methods and robotics methods more broadly\\u201d*\n\nWe decided to add an introductory passage where we cover the importance of self-supervised methods and visual RL. There, we also covered how self-supervised methods are used in goal-based RL as this may be less well known. We also moved the related work section after the introduction, as per your suggestion.\n\n*\\\"It would be good to explain further what type of information this would include, and perhaps describe formally in the appendix.\\\"*\n\nThanks for spotting this. We added a detailed description of the unconditional goals in Appendix D (D.1 of the updated paper).\n\n[1]: Zaheer, M. et al. Deep Sets. Advances in Neural Information Processing Systems 30, 3391\\u20133401 (2017).\"}",
"{\"title\": \"Response to Reviewer #3 Part 2/2\", \"comment\": \"*\\u201cThe baselines could include recent visual policy learning methods using data augmentation [1,2].\\u201d*\\n\\nThanks for this suggestion. However, we believe these methods are not applicable to our setting because 1) they are designed for a single-task setting, whereas we are targeting a multi-task setting, and 2) these methods rely on a reward signal being provided to construct augmented data, whereas we have no external supervision available. To our knowledge, these methods have not yet been applied to the setting we are considering, and therefore it is not clear to us how to use them as baselines.\\n\\n*\\u201cIt would be more convincing if the proposed method can reasonably deal with more than 2 objects.\\u201d*\\n\\nUnfortunately, we do not have enough computational resources to run all the hyperparameter optimization, etc., during the rebuttal period for more objects. The experiment reported in Fig. 3 with 3 and 4 objects shows that good representations will transfer to good performance using our policy architecture. However, we agree that this is an important future direction. \\n\\n*\\u201cThe website link is provided but nothing is there.\\u201d*\\n\\nWe are sorry for delaying the website upload. We have updated the website with visualizations of SMORL trained on GT representations and SCALOR representations. Code will follow.\"}",
"{\"title\": \"Response to Reviewer #3 Part 1/2\", \"comment\": \"Dear Reviewer 3, thank you for your review.\\n\\n*\\\"The paper claims \\\"Self-supervised RL\\\" but it is not clear which part of the method is trained with self-supervised learning. The proposed method seems to consist of unsupervised representation learning and reinforcement learning.\\\"*\\n\\nIn our setting, both goals and rewards are constructed intrinsically from observations, and in this sense our method is self-supervised. Generally, the term self-supervised RL refers to methods that acquire a diverse repertoire of general-purpose robotic skills without reward signals using only observations. These skills can be reused and combined during test time. This view seems consistent with the literature we examined. To clarify this, we added an introductory passage to the related work where we covered how self-supervision is used in goal-based RL without external rewards.\\n\\n*\\\"The proposed method assumes that the sub-goals are independent of each other but it is not true in many cases, e.g., collisions between objects.\\\"*\\n\\nIn the tasks we are considering, sub-goals are mostly independent as it is possible to achieve each of them independently without influencing other sub-goals. However, we agree that going towards more complex tasks such as object stacking would require rethinking this assumption, and is very interesting for future work. Therefore, we now mention this as a potential future direction in Sec. 6.\\n\\n\\n*\\\"One of the claims in the paper is that the proposed representations and policy can work with a variable number of objects but the experiments do not cover this setup.\\\"*\\n\\nWe have added an additional experiment (see 5.3) where we evaluate our policy trained on 2 objects in a 1-object environment, showing that its performance is comparable to a policy trained on only one object. 
\n\n\n*\\\"It could be good to show the quality of learned representations, such as object pose prediction and classification.\\\"*\n\nWe have implemented different analyses of SCALOR\\u2019s representation, which are presented in Appendix A of the updated paper. To summarize: \n\n - A clustering of $z^\\mathrm{what}$ components reveals a clear separation of the underlying objects (see A.1), which indicates that the representations can easily be used for tasks such as classification.\n - The representation of SCALOR is highly disentangled according to the Mutual Information Gap (MIG) measure in contrast to VAE representations, see A.2. \n\n\n*\\\"How does the policy decide when to switch to the next sub-goal?\\\"*\n\nThe algorithm sequentially attempts to solve all the recognized sub-goals. We now clarify this in the paper and have also added Algorithm 3 in the Appendix, which describes it precisely. We also discuss other potential approaches to implement this in Section 6 now.\n\n*\\\"The training time might be too short. The proposed method can be trained more (e.g., 1e6 environment steps).\\\"*\n\nWe have now trained both SMORL and RIG for twice as long and indeed get improved SMORL performance with average distance to the goal = 0.14 (2 objects), see plots in Appendix C. We will replace the plots in the main paper when all curves are finished. \n\n*\\\"Although the proposed method shows better learning performance in the Visual Rearranging task, the improvement is too marginal to claim the proposed method can solve the tasks.\\\"*\n\n As we report above, longer training shows improvements upon the initially reported results. Nevertheless, it also indicates that progress is needed on the representation side, to close the gap to the ground truth curves. Regarding this gap, there is an inherent problem: the reward for training is defined on the camera image, which actually has a slanted view of the scene (as used in [1]). 
Thus, the measure that is optimized by the agent does not directly match our evaluation criterion.\n\n[1]: Nair, Ashvin V., et al. \\\"Visual reinforcement learning with imagined goals.\\\" Advances in Neural Information Processing Systems. 2018.\"}",
"{\"title\": \"General Response\", \"comment\": [\"Dear reviewers, we now updated our paper to address your comments and questions. In particular, we added\", \"An out-of-distribution generalization experiment (Section 5.3)\", \"An analysis of the learned SCALOR representations in terms of clustering (Appendix A.1) and disentanglement (Appendix A.2)\", \"Environment traversals and how SCALOR processes them (Appendix A.3)\", \"An ablation study about the impact of different choices for attention heads (Appendix B)\", \"Curves for longer training time for the Visual Rearrange 2 objects experiment (Appendix C)\", \"An expanded related work section (Section 2) in which we more broadly address the literature around our method\", \"We now also added videos of the learned policies to the project website. We apologize for the delay in doing so. We will address individual questions and concerns as direct answers to the reviews. If you have any further requests or questions that we could address in the remaining rebuttal period, please do not hesitate to comment.\"]}",
"{\"title\": \"Interesting combination of object-centric representations and RL but insufficient experimental results\", \"review\": [\"### Summary\", \"The paper proposes to use object-centric representations for RL, which can efficiently handle multiple objects in the scene. To learn a policy that can take a variable number of object observations, the paper proposes the goal-conditioned attention policy, which can focus on objects of interest to achieve each sub-goal, and thus reduce the combinatorial complexity of multiple objects. The goal-conditioned attention policy can be efficiently trained with hindsight experience replay on the object-centric goal representations. The experiments demonstrate the superior performance of the goal-conditioned attention policy on dealing with multiple objects.\", \"### Strengths\", \"The idea of learning composable object-centric visual representations and goal-conditioned attention policy is an intuitive and plausible way to tackle combinatorial challenges in multi-object manipulation tasks.\", \"The experiments with ground truth states show that the proposed goal-conditioned attention policy can effectively handle multiple object manipulation tasks.\", \"### Weaknesses\", \"Based on Figure 4, none of the methods without ground truth states solves the tasks. Although the proposed method shows better learning performance in the Visual Rearranging task, the improvement is too marginal to claim the proposed method can solve the tasks.\", \"Why are only up to 2 objects considered in Figure 4? The proposed method has the advantage of dealing with multiple objects but did not show the benefit. It would be more convincing if the proposed method can reasonably deal with more than 2 objects.\", \"The baselines could include recent visual policy learning methods using data augmentation [1,2].\", \"The paper claims \\\"Self-supervised RL\\\" but it is not clear which part of the method is trained with self-supervised learning. 
The proposed method seems to consist of unsupervised representation learning and reinforcement learning.\", \"The proposed method assumes that the sub-goals are independent of each other but this is not true in many cases, e.g., collisions between objects.\", \"One of the claims in the paper is that the proposed representations and policy can work with a variable number of objects but the experiments do not cover this setup.\", \"### Questions and additional feedback\", \"It would be better to include the architecture of the visual encoders for baselines and the proposed method.\", \"It could be good to show the quality of learned representations, such as object pose prediction and classification.\", \"The training time might be too short. The proposed method can be trained more (e.g., 1e6 environment steps).\", \"The website link is provided but nothing is there.\", \"How does the policy decide when to switch to the next sub-goal? Including how to roll out an episode toward sequential sub-goals with the proposed model would be helpful.\", \"### Overall assessment\", \"The proposed method is intuitive and tackles an important problem of multi-object manipulation. However, the experiments and results are not yet convincing to claim the advantage in dealing with multiple objects. Overall, the reviewer thinks the paper requires more thorough experiments and is not ready to be published.\", \"[1] Laskin et al., Reinforcement Learning with Augmented Data\", \"[2] Kostrikov et al., Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"This work proposes to use object-centric unsupervised representation learning for self-supervised goal-conditioned RL, as opposed to prior work that assumes no particular structure on the learned representations (eg. VAEs). The proposed method, self-supervised multi-object RL (SMORL), uses the SCALOR architecture from prior work, then modifies the policy representation with single-object attention and also the reward function in RL with imagined goals (RIG). The results show that the method can learn simulated pushing and rearranging tasks in a self-supervised way with up to 4 objects in the scene, and outperforms RIG and Skew-Fit on pushing tasks. The proposed method is sufficiently novel, explores an important direction for self-supervised learning, and the results are quite strong.\\n\\nThe main motivation argued is that in more complex environments, it is difficult for fully unstructured reconstruction-based representation learning methods such as VAEs to recover a disentangled representation. This then causes difficulties running self-supervised RL algorithms such as RIG, which use the representation to compress the input, to set meaningful exploration goals, and for evaluating the reward. Using object-centric representations makes the representation more disentangled and improves RL, as demonstrated by the results. The paper would be strengthened by direct analysis of this hypothesis: for instance, correlations or measures of disentanglement between ground truth state and learned representation for SCALOR vs. VAE in the environments tested, particularly as number of objects increase.\\n\\nThe key novel contributions lie in how SCALOR is integrated with self-supervised learning. First, after learning the SCALOR representation from data, the proposed policy uses an attention mechanism to pay attention to reaching the goal for a single object at a time. 
One detail I did not understand was how the policy operates at test time. At training time, exactly one z^where is changed and the policy attempts to match that object. At test time, potentially many z^where could be different, so do you cycle between all the objects? Also, the paper would benefit from an ablation where you explore the choice of attention architecture; you could imagine that simply learning the object-centric representation and treating it as a flat representation like a VAE could also provide gains, so it is important to disentangle that effect from the novel policy architecture. The policy contribution is evaluated in Figure 3, where the success of SMORL+GT shows that the architecture makes manipulating a large number of objects possible.\\n\\nThe other differences to RIG have to do with goals and rewards. During self-supervised training, goals are sampled by sampling a new z^where for a single object, encouraging manipulation of exactly one object. This proposal seems logical, although in the long run it could potentially be an assumption that would not scale beyond object repositioning tasks. The reward is also modified to use the SCALOR latent, to penalize distance to the closest z^what object as the current goal with a threshold alpha for detecting the matching object. Again, the proposed reward function would be better evaluated with an ablation where you use the original reward from prior work -||z - z_g||.\\n\\nThe experiments show that the proposed method SMORL outperforms RIG and Skew-Fit on visual pushing tasks with many objects (and \\u201crearranging\\u201d, which is pushing with random initial positions of objects - having both sets of experiments potentially seems a bit redundant since it seems rearranging is strictly more difficult). 
SMORL is worse than an oracle (SAC+GT) which uses ground truth state information, but it seems to tend towards the oracle performance on even the more difficult tasks (seeing longer versions of the learning curve that show asymptotic performance would help understand this better).\\n\\nGenerally, the results on multi-object manipulation and self-supervised learning are strong. Further experiments as mentioned above would better allow the contributions to be understood independently.\\n\\nMinor comments\\n\\nThe related work is a bit thin and narrow - it addresses the nearest-neighbor works well but does not address self-supervised methods and robotics methods more broadly; it would be best to use the related work to make the paper more understandable to the broader community who are not embedded in goal-conditioned RL (and potentially put it in Section 2 instead of at the end).\", \"page_6\": \"\\u201cOut code\\u201d -> \\u201cour code\\u201d\\n\\n\\u201cIn general, we expect that it is beneficial for the policy to not always attend to entities conditional on the goal; we thus allow some heads to only attend to additional learned parametric queries (left out above for notational clarity).\\u201d - did not understand this, it would be good to explain further what type of information this would include, and perhaps describe formally in the appendix.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"This paper combines object-centric representations and self-supervised HER goal-conditioned policy learning to learn efficient RL policies for a robot manipulation task.\\n\\nThey use SCALOR as an object-based state representation, and use it to propose semantically meaningful goals for a SAC policy to achieve. They can then leverage this on new tasks to solve them efficiently.\\n\\nOverall, I found this paper very interesting, clearly written, well executed (with very sensible decisions throughout) and presenting several good ideas, especially in how to leverage the additional object structure effectively. It demonstrates good early results in the novel field of Object-oriented RL.\\n\\nI have a few comments/questions:\\n\\n1. The explanation of how the goal-conditioned policies were trained was very clear, and I especially like how you use z_what and z_where to construct novel meaningful goals (which will tend to just force the policy to move objects around, but that is a good prior for your environment!). However, how the \\u201cevaluation on a novel task\\u201d is done wasn\\u2019t as clear (i.e. when we try to implement a given goal or sequence of goals). More precisely, it is said in several places that the goal z_g is decomposed into sub-tasks where only one of the slots is used as the target. \\n 1. Could you provide more details on exactly how that is done? Do you learn p(z^where) on task later?\\n 2. What happens when some of the \\u201cobjects\\u201d aren\\u2019t achievable or controllable? E.g. I\\u2019d expect that you see a slot which represents the robot arm; is this treated differently?\\n2. How good are SCALOR representations in your environment? \\n 1. It would be very helpful to show samples / traversals in the Appendix.\\n 2. Similarly, comparing to the GT information you provide would be interesting (e.g. try to decode it? 
I understand you\\u2019d have to match the slots up unfortunately)\\n 3. Did you try continuing training SCALOR in the RL phase? It would make the results stronger and less reliant on a good random exploration strategy.\\n3. Did the hard matching cause issues while learning? I\\u2019d guess that the argmin is not too problematic because it is used in the reward computation only, but if you\\u2019d consider extending this setting to learning the goal proposal function z_g, this seems like a limitation?\\n 1. The explanation about the issue of using tracking as part of the model directly wasn\\u2019t especially clear to me. It might deserve a bit more expansion, especially in the Appendix?\\n4. How complex are the observations of the environment?\\n 1. Could you add more samples of the environment\\u2019s observations?\\n 2. It seems like the environment chosen is extremely similar to the standard Gym Fetch environment, did you try using it instead? https://gym.openai.com/envs/FetchPickAndPlace-v0/ \\n5. It is not entirely true that MONet/IODINE \\u201cdo not contain disentangled and interpretable features like position and scale\\u201d. \\nIt is true that they are not explicitly enforced (like done in SCALOR), but they do arise quite easily purely unsupervised. \\nEspecially, in my experience with both of these models, obtaining (and identifying) \\u201cposition\\u201d latents is rather easy. See for example Figure 5 in [1] and this animation [2].\\n\\nSo in summary, I believe this is a strong paper in a budding field, which deserves publication at ICLR and may interest many people there.\\n\\n* [1] https://arxiv.org/abs/1901.11390\\n* [2] https://twitter.com/cpburgess_/status/1091220207941701632\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Object centric learning using SCALOR in goal-conditioned, model-free RL\", \"review\": \"Summary:\\n\\nThe paper combines an existing generative world model (SCALOR, Jiang et al. 2019) with goal-conditioned attention policy. The method is evaluated on object manipulation environments based on MuJoCo (Todorov et al., 2012), Multiworld (Nair et al. 2018) and a Sawyer arm. The paper is clearly written; the authors discuss challenges and motivate their design choices well throughout the paper.\", \"score_justification\": \"The paper is mostly incremental but it provides enough contributions for acceptance. SMORL (Algorithm 1; the proposed method) is well-defined and motivated. The method outperforms a strong baseline (Soft Actor-Critic with Hindsight Experience Replay) in an object manipulation task with available ground-truth state representations. In the visual rearranging task, the proposed method performs better than existing self-supervised RL algorithms: RIG (Nair et al., 2019) and Skew-Fit (Pong et al., 2020) when using 1 and 2 objects. Experiments with a larger set of objects which demonstrate the compositional benefit of SMORL would strengthen the paper. I would also be interested in a visualization of the latent object representations learned by SMORL.\", \"pros\": \"The paper contributes to improving scene decomposition and object representation learning in model-free RL which has practical applications in robotics and object-oriented RL.\\n\\nThere is a discussion of existing limitations and challenges (limitations of VAEs in visual RL, defining reward functions in goal-conditioned RL), and how SMORL is meant to address them (goal-conditioned attention policy to handle set inputs, incorporating goal and object representations in the reward). 
\\n\\nExperiments in the multi-object environments showing that SMORL might be promising with and without ground-truth representations.\", \"cons\": \"There is no discussion of the computation cost of SMORL in comparison to the baselines (SAC with HER, RIG, Skew-Fit).\\n\\nThe \\u201ccompositional\\u201d aspect is unclear in the Experiments section. How does the \\u201ccompositional generative world model\\u201d translate to productivity, substitutivity or other forms of compositional generalization with respect to the objects in the image?\\n\\nExperiments with a larger set of objects would help in highlighting the advantage of SMORL in a general multi-object visual RL setting.\", \"questions_during_rebuttal_period\": \"Please address and clarify the cons above\", \"typos_and_structure\": \"The link in the abstract leads to an empty page\", \"equation_1\": \"G (distribution over the set of goals) is once bolded, once in italic\", \"it_seems_there_are_two_different_citation_styles_used_in_the_paper\": \"(Jiang et al., 2019) and Jiang et al. (2019)\\n\\nAlgorithm 1, line 1: typo \\u201csequences data\\u201d\\n\\nSection header Experiments can be moved to the next page\", \"typos\": \"\\u201cour method scale challenging tasks\\u201d, \\u201cout code\\u201d, \\u201cobjects identities\\u201d, \\u201c2 object\\u201d, \\u201c3 object\\u201d, \\u201c4 object\\u201d\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
eEeyRrKVfbL | Balancing training time vs. performance with Bayesian Early Pruning | [
"Mohit Rajpal",
"Yehong Zhang",
"Bryan Kian Hsiang Low"
] | Pruning is an approach to alleviate overparameterization of deep neural networks (DNN) by zeroing out or pruning DNN elements with little to no efficacy at a given task. In contrast to related works that do pruning before or after training, this paper presents a novel method to perform early pruning of DNN elements (e.g., neurons or convolutional filters) during the training process while preserving performance upon convergence. To achieve this, we model the future efficacy of DNN elements in a Bayesian manner conditioned upon efficacy data collected during the training and prune DNN elements which are predicted to have low efficacy after training completion. Empirical evaluations show that the proposed Bayesian early pruning improves the computational efficiency of DNN training with small sacrifices in performance. Using our approach we are able to achieve a $48.6\%$ faster training time for ResNet-$50$ on ImageNet to achieve a validation accuracy of $72.5\%$. | [
"Efficient Training",
"Multi-Output Gaussian Process",
"Gaussian Process",
"Bayesian",
"Single-shot network pruning",
"Dynamic Sparse Reparameterization",
"Lottery Ticket Hypothesis"
] | Reject | https://openreview.net/pdf?id=eEeyRrKVfbL | https://openreview.net/forum?id=eEeyRrKVfbL | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"i18cduOMgEL",
"QvPk0fYIvIk",
"jaHqZHFt1ZK",
"ZQjTWPdNCrY",
"MyP65VLGan",
"yXXwLd6-Vts",
"0zFLCJnPX5",
"WWmYu3mrEQ",
"nsRNWjJTWUr",
"TutO705QPe0",
"QQsaYZ8Icj",
"m38SFfTc7-Y",
"nMLdqHEWgXG",
"MtvY8vJNhjI"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040352144,
1606305529860,
1606305482140,
1606060059637,
1606024007286,
1605894362863,
1605894075667,
1605893962635,
1605893795435,
1605893603561,
1604303585428,
1604039521508,
1603745989823,
1603696391110
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3477/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3477/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3477/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3477/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3477/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3477/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3477/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3477/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3477/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3477/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3477/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3477/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3477/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper considers the problem of pruning deep neural networks (DNNs) during training. The key idea is to include DNN elements only if they improve the predictive mean of the saliency (efficacy of the DNN elements in terms of minimizing the loss function). The objective of early pruning is to preserve the sub-network that can maximize saliency. This optimization problem is NP-hard, and even approximation is very expensive. The paper proves that one can simplify the approximation by ranking the network elements by the predictive mean of the saliency function.\\n\\nThe proposed approach is novel as most of the prior work on pruning has focused on either (i) pruning at network initialization or (ii) pruning after the network has been fully trained.\", \"couple_of_issues_with_the_paper_are\": \"1. The current approach is somewhat complicated with many hyper-parameters\\n2. Experimental results are not very compelling when compared to pruning at network initialization\\n\\nOverall, my assessment is that the paper takes a new research direction and has the potential to inspire the community, and follow-up work may be able to overcome the above two issues in the future. However, due to the remaining shortcomings, the paper is not judged ready for publication in its present form. I strongly encourage the authors to resubmit the paper after addressing the above two concerns.\"}",
"{\"title\": \"Newest revision\", \"comment\": \"Dear reviewer,\\n\\nWe have beautified references to PruneTrain within our work to further differentiate our work from PruneTrain. See footnote 4, as well as coverage in related work. We have also added inference FLOPs for both PruneTrain and BEP in Table 6 (Appendix E). We find that BEP $1e-1$ consumes $55.2$% of training FLOPs in contrast to $60$% consumed by PruneTrain. BEP $1e-4$ consumes $56.0$% of training FLOPs.\\n\\nWe also note in the Appendix E that it is unclear how to solve the early pruning problem (Eq 3,4) with PruneTrain as it does not allow for a mechanism to precisely control the trained network size.\"}",
"{\"title\": \"Final revision\", \"comment\": \"Dear reviewers,\\n\\nAs the discussion period closes, we have one last revision for you. We have completed our ablation study, varying the hyperparameters of our approach. This can be found in Table 3. We find that in general, all hyperparameters are robust to changes with mild degradation observed only in the extremal settings.\\n\\nWe have also added a slight beautifying change to further differentiate our use case/problem definition from PruneTrain [1] with footnote 4 in the problem statement.\\n\\nWe hope that you will keep the above in mind during internal discussion. Once again, we extend our most humble and gracious thanks for your valuable review and feedback.\\n\\nSincerely,\\n\\nThe authors of \\u201cBalancing training time vs. performance with Bayesian Early Pruning.\\u201d\\n\\n[1] Lym, Sangkug, et al. \\\"PruneTrain: fast neural network training by dynamic sparse model reconfiguration.\\\" In Proc. Int\\u2019l Conf. for HPC, Networking, Storage & Analysis, 2019.\"}",
"{\"title\": \"Review response\", \"comment\": \"For Q3. We would like to clarify that BEP 1e-1 reduces the delta from -1.87% to -1.6% (i.e., a 14% reduction in $\\\\Delta$); we believe that this should be stated in context as the original error is quite small already. This was also the highest pruning scenario considered by the PruneTrain paper; thus, we cannot test under a higher performance degradation, which would allow for a more significant gap to emerge. We would certainly like to perform further experiments; however, the code for PruneTrain is not publicly available.\\n\\nCould the reviewer clarify what is meant by \\\"rebuttal shifts its contribution a bit from training time to inference time reduction?\\\" The inference FLOPs in our problem statement is a user-specified argument $B_s$ in Algorithm 1, and is not minimized in our approach. Our contribution is to balance the training time vs. performance tradeoff within a user-defined inference FLOPs budget ($B_s$).\\n\\nWith regards to Q3/Q4. Respectfully, we acknowledge the value of a perspective focused on training time vs. performance without controlling for inference FLOPs (i.e. $B_s$). However, our contribution is with regards to training time improvement within a user-specified inference FLOPs budget, $B_s$. This derives from extending the pruning problem definition (Eq. 2) along the temporal dimension (Eq. 3/4). We think that there are practical use cases for this approach such as training for resource constrained devices (i.e. edge devices/mobile phones/customer workstations). We hope the reviewer will consider our contribution in view of the use cases we have addressed.\"}",
"{\"title\": \"Feedback\", \"comment\": \"Thank you for the details. Answers Q1 & Q2 make sense to me.\\n\\nFor Q3, only \\\"BEP 1e-1 and BEP 1e-4 outperform PruneTrain by 0.3% and 0.1%, respectively\\\" under the same inference cost is a solid comparison; however, the improvements are marginal, and the method is more complex and introduces more hyper-parameters. \\nComparison with a lower accuracy but higher efficiency is always hard to judge. A training FLOPs vs. accuracy curve may be more valuable.\\n\\nMoreover, the rebuttal shifts its contribution a bit from training time to inference time reduction. This is tricky. The original tone of the paper is about training time reduction as agreed by other reviewers. In Q2, the authors emphasize its training time reduction since the approach cannot outperform previous pruning methods, which of course rely on additional training time. This is OK. However, when it is necessary to compare methods for training time reduction (Q3 & Q4), the arguments go to inference time reduction. A figure that would make the claims valid is a plot of training FLOPs vs. inference FLOPs for all methods under the same accuracy.\"}",
"{\"title\": \"Review responses\", \"comment\": \"Thanks for your suggestions on the notations, related work, and hyperparameters. We have discussed the robustness of hyperparameters under the common concerns and will address your other concerns below:\\n\\nQ1. I recommend the authors to include a table of mathematical notations.\\n\\nWe have provided a Table of Notations (Table 6, Appendix F) in our new draft.\\n\\nQ2. Lots of pruning methods can prune parameters during training before the lottery ticket hypothesis paper appears.\\n\\nWe agree a number of works have considered some form of pruning during training. We have discussed some (but certainly not all) of these works in the Related Work section, including (Narang et. al. 2017, Mocanu et. al. 2018), as well as several variants of dynamic sparse reparameterization including DEEP-R (Bellec et. al. 2018). To the best of our knowledge, these approaches do not generally result in wall time improvement and often require increased training time in order to train networks which allow for efficient inference *after* training. The only exception to this which we were aware of was the work of Dettmers and Zettlemoyer (2019), which shows only a modest (32%) speedup even with a very high percentage (95%) of connection weights being pruned (Table 3 [2]).\\n\\nWe conjecture that these works do not show practical speedup as they focus on connection pruning during training which yields sparse weight matrices. Sparse matrix operations cannot easily leverage parallel architecture of GPUs (footnote 2). \\n\\nQ3. PruneTrain can reduce end-to-end training time of ResNet-50 by 39% without losing accuracy by simply using a previous sparsity regularization method (while this paper has a significant accuracy drop to 72.5% for ResNet-50).\\n\\nWe sincerely apologize for missing a related work (due to its publication in a systems conference, it was outside our field of vision), and thank the reviewer for highlighting it for us. 
We have added a comparison to this work in Table 5 in Appendix E. To allow for a fair comparison, we trained a BEP network with inference cost (47% inference FLOPs) equivalent to that of a PruneTrain-pruned network. In this setting, BEP 1e-1 and BEP 1e-4 outperform PruneTrain by 0.3% and 0.1%, respectively.\\n\\nWe would like to point out that our reported figure of 72.5% (-3.2%) consumes only 28% of the baseline FLOPs at inference while PruneTrain (-1.87%) consumes 47% (Table 1 in [1]) of baseline inference FLOPs.\\n\\nAlthough we would like to do further comparisons with PruneTrain (e.g., with a higher portion of pruned filters), the unavailability of the code makes this difficult. We think it is hard to reimplement their approach in the limited discussion time available. We will certainly endeavor to include further comparisons to PruneTrain in a future draft.\\n\\nTo achieve the further wall time improvement which yields the 39% figure, PruneTrain proposes a technique orthogonal to our approach, where the minibatch size is dynamically increased as a structurally pruned network requires lower memory on GPUs. We assert that this technique can be combined with our approach as well as with SNIP/GraSP to achieve further speedup.\\n\\nQ4. In Table 3 it is unclear if BEP can outperform SNIP/GraSP or not under the same \\\"Time\\\"\\n\\nWe do not claim that BEP can outperform SNIP/GraSP under the same time. We would like to highlight that BEP captures the *balance/tradeoff* between performance (after training) and time, which is not addressed by SNIP/GraSP, which solely prune during initialization. In our experiments, we controlled the size of the networks after training (i.e., same inference time). 
Although it is possible to control for the training time, this would make SNIP/GraSP/BEP produce pruned models of different sizes, and thus different inference times, which does not appropriately solve the early pruning problem (constraint 3b, $\\\\lvert\\\\lvert m_T \\\\rvert\\\\rvert_0 \\\\leq B_s$). Constraint 3b is important as training is often performed to yield a network of a specific size for usage in resource constrained scenarios (e.g. mobile phones). We have mentioned this use case and the importance of constraint 3b in our problem statement in our new draft.\", \"q5\": \"Improvements over SNIP/GraSP.\\n\\nWe highlight that on CIFAR-10/CIFAR-100, our BEP outperforms competing approaches by a significant margin. In our new draft, we have also added a new experiment which prunes a much larger portion of convolutional filters in ResNet-50. In this setting, BEP 1e-1 outperforms SNIP by 2.8%, and GraSP by 1.5% in Top-1 performance. This may be found in Table 4.\\n\\n[1] Lym, Sangkug, et al. \\\"PruneTrain: fast neural network training by dynamic sparse model reconfiguration.\\\" In Proc. Int\\u2019l Conf. for HPC, Networking, Storage & Analysis, 2019.\\n\\n[2] Tim Dettmers and Luke Zettlemoyer. \\u201cSparse networks from scratch: Faster training without losing performance.\\u201d arXiv, 2019.\"}",
"{\"title\": \"Review responses\", \"comment\": \"We thank you for providing valuable feedback, which we will take into account when revising our paper. We would like to address your questions below:\\n\\nQ1. The reviewer worries that under a series of approximations \\u2026, and thus makes the mathematical proofs irrelevant:\\n\\nFirstly, we wish to emphasize that our goal of providing the proofs is to justify our design decisions such that the performance of the approximations can be guaranteed theoretically. About the empirical quantifications, we have added more experimental results in our new draft to show the predictive performance of MOGP (Fig. 1). For other simplifications in Section 3.3, it is highly non-trivial (at least to us) to quantify how close the approximated pruning solution is to the optimal solution empirically since the original optimization problem (i.e., equations 3-4) is too difficult to solve. Now, we can only verify their performance indirectly via the accuracy of the pruned networks. However, we think it is interesting to explore a principled way of quantifying the performance of these simplifications, which we will consider in the future work.\\n\\nQ2. What is the overhead introduced by the maximum likelihood estimation of the MOGP, and Bayesian early pruning (BEP, Algorithm 1)? Does the benefit of having a pruned model always outweigh the costs of pruning?\\n\\nIn our initial submission, we have presented end-to-end wall time (Table 3) including the time to train the MOGP models. The results show that our proposed BEP (30+ hrs) takes much less time than the ResNet training without pruning (55 hrs). To show the overhead introduced by pruning clearly, we have explicitly delineated the training time into time taken by \\u201ctraining the network\\u201d and \\u201ctime taken by pruning\\u201d (e.g., MOGP and pruning steps in BEP) in our new draft. 
These results can be found in Table 4.\\n\\nIn some cases, we think it is possible for pruning to increase the overall end-to-end training time due to modeling/pruning overhead. Fortunately, the overhead can be reduced by setting a larger value of T_step in our proposed BEP. \\n\\nQ3. How accurate is the saliency prediction? ... For instance, instead of using MOGP for prediction, one may consider static saliency.\\n\\nTo show the accuracy of the saliency prediction, we added new figures (Fig. 1) in our new draft, which visualize the GP/MOGP predictive mean of the saliency together with the ground-truth saliency values. We can observe that the predictive mean values of MOGP are quite close to the true saliency values. The predictive accuracy of MOGP is much better than that achieved using GP. Also, MOGP is able to capture the long-term trend of saliency curves with significantly less data than GP.\\n\\n\\nQ4. Additionally, experiments should not only have averaged results but also provide standard deviations.\\n\\nWe have provided performance with the standard error in all the results in our new draft. They were previously excluded due to space constraints.\\n\\nQ5. In Algorithm 1, $\\\\mu_{T \\\\mid 1:t}$ is not assigned value anywhere.\\n\\n$\\\\mu_{T \\\\mid 1:t}$ is the predictive mean vector computed using MOGP. We have revised Algorithm 1 in our new draft to clarify this and we apologize for the oversight.\"}",
"{\"title\": \"Review responses\", \"comment\": \"We thank you for providing valuable suggestions and feedback, which we will consider seriously in revising our paper. We have revised the Experiments section so that it is easier to follow. In particular, we would like to address two of your comments below:\\n\\nQ1. I find it hard to understand the intuition behind the dynamic penalty scaling.\\n\\nIn our new draft (Section 4.2), we have included the intuition behind the dynamic penalty scaling as follows: The dynamic penalty scaling is used to increase the penalty if the anticipated compute required to complete training (i.e., $(T-t)\\lvert\\lvert m_t \\rvert\\rvert_{0}$) begins to exceed the remaining amount of compute budget (i.e., $B_{t, c}$). In such a case, we need to focus more on satisfying the budget constraint (i.e., the second term of equation 6) during the optimization, which is achieved by an increased penalty. \\n\\nQ2. It is not clear to me why the dynamic sparse reparameterization (DSR) methods are not listed as baseline in evaluations.\\n\\nWe did compare against DSR with respect to small-scale CNNs on CIFAR-10/CIFAR-100. The results showed that our proposed BEP better preserves performance at an equivalent network size. We agree that it would be a valuable addition to compare against DSR for more complex networks such as ResNet. However, unlike the small-scale experiment, which is used to verify only the accuracy of the pruned model, wall-clock training time is an additional important criterion for measuring the performance of a pruning algorithm on a large-scale network. Due to the heterogeneous implementations (PyTorch vs. TensorFlow) of DSR and the other tested algorithms, we are not able to make a fair and accurate training time comparison, and thus removed DSR from the baselines in the ResNet experiments. We are now trying to resolve this implementation issue and will include the DSR results for ResNet in the revised version of this paper.\"}",
"{\"title\": \"Review responses\", \"comment\": \"We thank you for appreciating our contributions and providing valuable feedback. For your concerns about the dense notation and the proposed algorithm being complicated, we have addressed them under the common concerns. We would like to address your remaining comments below:\\n\\nQ1. Whether the simplification of $m_{t-1} = \\u2026 = m_{T}$ is mild enough.\\n\\nWe do not think that the simplification $m_{t-1} = ... = m_{T}$ is mild. Indeed, this is a coarse approximation that we have to make to solve the optimization problem in a reasonable time. It is a highly non-trivial problem (at least for us) to look for a mild simplification while remaining competitive in wall time, which we will consider in our future work. Also, it is hard to compare the results after this simplification with the optimal solution since the original optimization problem (i.e., equations 3-4) is too difficult to solve. \\n\\nQ2. Strength of experimental results.\\n\\nIn our new draft, we have added a new experiment on ResNet-50, pruning a very large percentage of convolutional filters. This may be found in Table 4, rightmost sub-Table. At this setting, BEP 1e-1 (Top-1, 53.7%) outperforms SNIP (50.9%) and GraSP (52.2%). This is much more in line with our results on CIFAR-10/CIFAR-100. Pruning a significant portion of the network is an important use-case as deep learning models continue to grow larger and thus require considerable pruning to allow training on commodity hardware.\"}",
"{\"title\": \"Common responses\", \"comment\": \"We humbly and graciously thank the reviewers for understanding our contributions and providing valuable feedback, which we will consider seriously in revising our paper. Aside from our individual responses to reviewers, we would like to address the following common concerns:\\n\\n1. Simplifying Notations\\n\\nIn our new draft, we have made the following changes to improve the notation: (a) In Section 2, we have simplified the description of the saliency function and moved its exact definition to the appendix. This does not affect the readability in terms of understanding our proposed BEP algorithm since it is agnostic to the saliency function, as has been mentioned in the Introduction section; (b) We have removed some details regarding MOGP in Section 3.2 and kept only information that is closely related to our pruning formulation; and (c) We have eased the notational clutter in Lemma 1 and made it more readable. To help readers understand Lemma 1, we have clarified its implications in the paragraphs after it. We hope these modifications can help readers focus on our main novel idea of Early Pruning. \\n\\n2. On hyperparameters\\n\\nWe would like to clarify that our approach does not have many hyperparameters to be set/tuned. Among the required parameters in Algorithm 1, only two hyperparameters (i.e., lambda and $T_{step}$) need to be set manually; all the others are either user-specified requirements (e.g., budget $B_s$) or parameters with fixed values (e.g., $B_{1,c}$). 
For these two hyperparameters, we have provided the dynamic penalty scaling strategy for setting lambda automatically, and values of 10-20 work well for $T_{step}$ in practice.\\n\\nNext, we would like to highlight the practical necessity of dynamic penalty scaling in mitigating the difficulty of deciding an appropriate lambda: Due to the recursive problem definition, an approach to solve Early Pruning must solve several optimization problems ($[\\\\hat{\\\\rho_t}]_{t=1,...,T}$). This, unfortunately, would require the use of several penalties, one per optimization problem. This abundance of hyperparameters is difficult to tune properly and using a single hyperparameter does not work well in practice. To resolve this, we have chosen to use a feedback loop (dynamic penalty scaling) to determine the penalties dynamically. This feedback loop drives the pruning decisions by modulating the penalty $\\\\lambda_t$ (i.e., $\\\\lambda_{dyn}$ in our initial submission) at iteration t using the feedback on lambda\\u2019s efficacy at achieving the desired user-specified sparsity budget $B_s$. Intuitively, the penalty is increased (i.e., the importance of the constraint is increased) if the anticipated compute required to complete training begins to exceed the remaining amount of compute budget. We have shown this derivation more explicitly in Appendix D and cited the relevant literature (PID controllers) which inspired this approach.\\n\\nIn addition, there are secondary hyperparameters (i.e., number of latent functions, variational inducing points) due to our choice of MOGP for saliency modeling as it captures the co-evolution and co-adaptation effects of neural network training. 
We distinguish BEP hyperparameters from MOGP hyperparameters because in our proposed approach, MOGP may be replaced by any surrogate model which provides a belief over saliency.\\n\\nWe also think that our hyperparameters are robust as similar settings (i.e., derived from our small-scale CIFAR-10/CIFAR-100 experiments) were used in our larger-scale experiments. Thus, we believe that it is possible to use our proposed Algorithm 1 with our recommended settings of hyperparameters and achieve good results.\\n\\nWe agree that it would be interesting to see how $T_{step}$ and the various MOGP parameters would affect the final pruned result, though we have performed validation which indicated relatively good hyperparameter settings for the number of latent functions ranging between 4 and 18, as well as a performance improvement of MOGP modeling over GP modeling (Table 1). In our new draft, we have included a preliminary ablation study which varies $T_{step}$ as well as the variational inducing points of our MOGP model (Table 3). We find that these two parameters are fairly robust to changes. We will attempt to expand this study further within the time of the discussion period.\"}",
"{\"title\": \"Well organized, interesting contribution, experiments could be improved\", \"review\": [\"This paper presents a training-time pruning method for deep neural networks. The main idea is to include network elements only if they could improve the predictive mean of the saliency, where the saliency measures the efficiency of the network elements in terms of minimizing the loss function.\", \"The paper presents a clear objective of early pruning, which is to preserve the sub-network that can maximize the saliency function. This optimization problem is NP-hard, and even approximation is very expensive. The authors prove that one can simplify the approximation by ranking the network elements by the predictive mean of the saliency function.\", \"The authors evaluated their method empirically and showed that it is superior to GP modeling and provides a trade-off between accuracy and the training time.\", \"This paper is well organized. The theoretical analysis is well written and provides a good review to readers who lack relevant background.\", \"The idea is interesting and Lemma 1 should be of value to researchers working on similar problems.\", \"Reading from the review section, the problem of training-time pruning is not well studied yet, so this paper could be seen as an important contribution.\", \"The experiment section is a bit hard to follow. Among other problems, I find it hard to understand the intuition behind the dynamic penalty scaling.\", \"It is not clear to me why the dynamic sparse reparameterization methods are not listed as baseline in evaluations.\", \"Overall I think this is a good paper. The authors could improve it by addressing the two problems in the evaluation section.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Review - Bayesian early pruning\", \"review\": [\"This paper introduces a new method to accelerate training by saliency-based pruning. The method predicts future saliency for neurons based on observed saliency with a multi-output Gaussian process (MOGP), then greedily prunes neurons with the least saliency at fixed intervals during training. The authors provide extensive mathematical analysis to show that the algorithm produces pruning mask solutions that are close to the optimum of the formulated optimization (the reviewer is unable to verify). The experimental results showed improvements in task accuracies of trained models but with longer training times.\", \"The reviewer believes that the proposed method is novel, as it considers historical statistics during training to provide a more accurate saliency prediction. While this is interesting, the reviewer is concerned with the practicality. The paper can be improved with answers to the following questions:\", \"The mathematical analysis showed that the algorithm produces pruning mask solutions that are close to the optimum. Is it possible to quantify? The reviewer worries that under a series of approximations and heuristic-based modeling below, the solution no longer aligns with the goal of pruning optimality, thus rendering the mathematical proofs irrelevant:\", \"3.1 problem statement as the pruning objective,\", \"3.2 saliency as MOGP with exponential kernel,\", \"3.3 subsequent simplifications,\", \"4 variational approximation of MOGP.\", \"What is the overhead introduced by the maximum likelihood estimation of the MOGP, and Bayesian early pruning (BEP, Algorithm 1)? Does the benefit of having a pruned model always outweigh the costs of pruning?\", \"How accurate is the saliency prediction? Can this be quantified and illustrated somehow, e.g. with the predicted values in Figure 2 of the appendix? 
If this is not accurate, then one may expect that the components in the pruning procedure can be replaced with simpler variants without a detrimental impact. For instance, instead of using MOGP for prediction, one may consider static saliency. (Or is this identical to traditional approaches used by SNIP and GRASP?)\", \"Additionally, experiments should not only have averaged results but also provide standard deviations.\", \"The paper is in general well-written; the reviewer has minor complaints:\", \"In Algorithm 1, `\\\\mu_{T \\\\mid 1:t}` is not assigned value anywhere.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A push in the right direction\", \"review\": \"Summary\\n\\nThis paper introduces a method for pruning during the training process in order to filter out unimportant/redundant components of the network continuously to speed up training and perform gradual pruning over the training process. The proposed approach is novel in the sense that the vast amount of prior work on pruning has focused on either (i) pruning on network initialization (e.g., SNIP, etc.) or (ii) pruning after the network has been fully trained (e.g., Magnitude Pruning, among many others). The introduced method uses the Taylor-series based saliency criterion (of Molchanov et al., 2017) and uses a multi-output Gaussian process to predict future saliencies and to determine whether a parameter can be safely removed early on during training.\\n\\n\\nRationale for Score\\n\\t\\nAs far as the negatives go, I take issue with the fact that the proposed approach seems to be highly complicated -- requiring a multitude of hyper-parameters/design choices, and tuning functions/ablation studies (e.g., for lambda). The empirical results are also not very compelling -- the proposed approach requires many more training hours than compared approaches that prune on initialization (cf., SNIP or GRASP in Table 3) and attains only a modest (.3% to .5%) improvement in pruning performance as measured by test accuracy.\\n\\nWith that said, I recommend weak acceptance with the hope that this work inspires more research in this area and that the shortcomings will be remedied in subsequent works that build upon the techniques introduced in this paper. 
I believe that this work has merit in pushing the community in the direction of trying to achieve one of the overarching goals of pruning: an efficient way to simultaneously train and search for an optimized network architecture for a particular application.\\n\\n\\nStrengths\\n\\n- The paper is highly relevant to the ML and optimization communities; the premise that, e.g., filters that would have been pruned anyway after training should not be trained to save computation time (and improve pruning performance) is very intuitive and appealing.\\n\\n- It is commendable that the authors tackle the very difficult problem of pruning during training and try to model the interdependencies/future uncertainty in a principled way (using MOGP). To my knowledge, there is no other work that attempts to tackle this problem as rigorously as this paper does.\\n\\n- The method is overall motivated by principled insights and there is some analysis to justify parts of the method (Lemma 1)\\n\\n- The authors perform evaluations on appropriate benchmarks (ResNet50 trained on ImageNet) and achieve superior pruning results (in terms of test accuracy after training/pruning) relative to those of recently-proposed, state-of-the-art approaches (SNIP and GRASP)\\n\\n\\nWeaknesses\\n\\n- The proposed algorithm is not parameter-free (unlike SNIP, which is virtually parameter-free), is quite complicated (and I imagine difficult to implement), and there is little justification for certain components of the method, e.g., the dynamic scaling function (and choices of lambda), whether the simplification of m_{t-1} = \\u2026 = m_{T} is mild enough. 
It is not clear to me how a practitioner can run the proposed algorithm in a parameter-free way without having to conduct ablation studies of their own first, especially since, as the authors note, \\u201cWe observed that the penalty parameter was difficult to tune properly, either being too aggressive at pruning, or too passive\\u201d as the justification for the dynamic scaling function\\n\\n- Parts of the paper are too dense and notation-heavy, and this hurts readability and understanding significantly, e.g., Lemma 1, paragraph regarding the introduction of the saliency function on pg. 2.\\n\\n- The presented experimental results are not very compelling. For example, in Table 3, we see that BEP 1e-4 achieves a ~.4% improvement over SNIP and GRASP, at the cost of ~7-8.4 more hours of training time. This calls into question the effectiveness of the proposed approach -- which is, at the end of the day, meant to *speed up* training + pruning by removing unnecessary components of the network early on.\\n\\n\\nClarity \\n\\n- The paper is reasonably well-written and organized overall. It was clear that the authors compressed some of the mathematical expressions/lemmas (e.g., statement of Lemma 1), which is somewhat understandable given the page limit, but this hurt readability and understandability.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A complex approach without significant improvement\", \"review\": \"The paper proposes a Bayesian-based approach to early prune parameters, which are predicted to have low saliency/importance, with the goal of accelerating the training of deep neural networks. The predictor is a \\\"multi-output Gaussian process\\\" which is computationally expensive.\\n\\nThe writing quality and clarity of this paper are OK, but I recommend that the authors include a table of mathematical notations.\\n\\nThe idea of using a Gaussian process based predictor to predict the importance of parameters during the training process is interesting. However, this paper only compares with some methods (e.g. SNIP and GraSP) which prune parameters before training, inspired by the lottery ticket hypothesis paper. Many pruning methods could prune parameters during training before the lottery ticket hypothesis paper appeared. For example, PruneTrain https://arxiv.org/abs/1901.09290 can reduce end-to-end training time of ResNet-50 by 39% without losing accuracy by simply using a previous sparsity regularization method (while this paper has a significant accuracy drop to 72.5% for ResNet-50). The paper should compare with those stronger methods, regardless of the fact that in Table 3 it is unclear if BEP can outperform SNIP/GraSP or not under the same \\\"Time\\\".\\n\\nMoreover, the method introduces new hyperparameters. To tune the hyperparameters, the method should be run multiple times. I cannot see how it will make training a neural network faster, unless the hyperparameters are highly robust, which is not deeply discussed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
ONBPHFZ7zG4 | Temporally-Extended ε-Greedy Exploration | [
"Will Dabney",
"Georg Ostrovski",
"Andre Barreto"
] | Recent work on exploration in reinforcement learning (RL) has led to a series of increasingly complex solutions to the problem. This increase in complexity often comes at the expense of generality. Recent empirical studies suggest that, when applied to a broader set of domains, some sophisticated exploration methods are outperformed by simpler counterparts, such as ε-greedy. In this paper we propose an exploration algorithm that retains the simplicity of ε-greedy while reducing dithering. We build on a simple hypothesis: the main limitation of ε-greedy exploration is its lack of temporal persistence, which limits its ability to escape local optima. We propose a temporally extended form of ε-greedy that simply repeats the sampled action for a random duration. It turns out that, for many duration distributions, this suffices to improve exploration on a large set of domains. Interestingly, a class of distributions inspired by ecological models of animal foraging behaviour yields particularly strong performance. | [
"reinforcement learning",
"exploration"
] | Accept (Poster) | https://openreview.net/pdf?id=ONBPHFZ7zG4 | https://openreview.net/forum?id=ONBPHFZ7zG4 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Vx8os21IWRH",
"4E339XHLmXu",
"WDde9JwcK64",
"rtk7KWBcMay",
"qpCSKNgVwcO",
"Eu3TmidKfKk",
"2Y0Nz-JE8-O",
"OGzLmImcRUW",
"BPgnYJkttxz",
"I0qnTjbh7FN",
"60hO4iO4MKb",
"U1tc890kk5",
"-4eTo3jrZx",
"sAeoh5Wzew4",
"x1btFJ1WXm",
"MRTarz4lnQ7",
"1r-RCJtYHGP"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040391257,
1606252144995,
1606245830349,
1606212371381,
1606212259331,
1605967578505,
1605813266034,
1605727118476,
1605727078508,
1605726989618,
1605726901727,
1605726780137,
1604623992818,
1604225804453,
1603836301228,
1603789596500,
1603779806912
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3475/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3475/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3475/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3475/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3475/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3475/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3475/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3475/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3475/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3475/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3475/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3475/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3475/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3475/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3475/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3475/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a simple generalization to epsilon-greedy exploration that induces temporally extended probes and can leverage options. The idea and analysis are trivial. Computational results demonstrate when this sort of exploration is helpful. The paper is well written and the authors offer a fair assessment of when these ideas do or do not address challenging exploration tasks. A range of computational results support and offer insight into the concepts.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for updating your score; we really do appreciate your active engagement. We have updated the revision with some additional information and results based upon your comments and questions.\\n\\nIn thinking more carefully about your question regarding Theorem 1, we realized that we made a notational mistake in the statement of the theorem. Where you see \\u201cO\\u201d it should be \\u201c\\\\Theta\\u201d. This means that the bound is any polynomial in |X| and |A| rather than a polynomial of order 1 (linear). We hope this alleviates your concern, since building such a set of options seems more realizable. The worked example on the Chain MDP in which ez-greedy satisfies the assumption may be illustrative. We also argue that, as one adds more covering options, they will eventually satisfy Assumption 1. \\n\\nFinally, we note that Theorem 1 is only one possible way of showing that temporally-extended epsilon-greedy is feasible. The results in the Jinnai et al. paper actually provide another line of argument, as their notion of expected cover time applies directly to temporally-extended epsilon-greedy, and in their case is paired with the option learning algorithm to minimize this quantity.\\n\\nGiven the time left to the deadline we do not think we will be able to get UCB-Q working successfully, but have included our experiments with RMax to provide some additional context on relative performance in the tabular Gridworld domain. We compare epsilon-greedy and ez-greedy with RMax, with the \\u2018known\\u2019 threshold set to 1 (assuming the environment is deterministic) and 10, and observe that ez-greedy compares reasonably well with the latter. These results can be found in Figure 14 (in Appendix G).\"}",
"{\"title\": \"New Score\", \"comment\": \"I would like to raise my score to 6 for the following reasons:\\n1. this algorithm is simple to implement and practical for deep RL;\\n2. it brings more attention to exploration with simple options;\\n3. the paper is well-written and clear;\\n4. the authors added extra experiments to address my second and third concerns;\\n5. it provides more thoughts on how we evaluate exploration strategies: overfitting to hard tasks or improving over easier tasks in general.\", \"i_did_not_give_a_higher_score_since\": \"1. more theoretical understanding is needed, e.g., the reason for choosing the zeta distribution;\\n2. the action-repeat strategy is not new;\\n3. the action-repeat strategy can be domain-specific.\"}",
"{\"title\": \"Updated revision\", \"comment\": \"Thank you, we have updated the revision, and hope to upload a final revision before the deadline. We have added the Chain MDP example and discussion in Appendix A of the updated version. We are currently working on determining sufficient conditions for covering options to satisfy Assumption 1. We hope to include this before the deadline.\\n\\nWe have also added a new set of experiments in Appendix F (Experimental Results: Limitations), where we investigate the effect on the gridworld results of adding obstacles and traps at varying density levels to the environment. As we explain in detail in the section, these are done by using procedurally generated gridworlds with obstacles (which block the agent) or traps (which end the episode) at varying densities, while also randomizing start and goal locations. We see that ez-greedy is indeed negatively affected by these modifications, although not as catastrophically as we might have feared. In general, we see ez-greedy performance degrading with increased density until eventually matching that of epsilon-greedy.\\n\\nWe are still working on providing UCB-Q results as requested by the reviewer. We have verified that an RMax agent performs as expected, which is generally better than either epsilon- or ez-greedy. And we are now trying to obtain similar results for the requested method. The superior performance of RMax in this setting is expected, because (1) these are small, tabular, environments where exact counts are possible and (2) this method eventually stops exploring and will thus show better final performance if the competing methods are not adjusted accordingly. We expect the results of UCB-Q to exhibit a similar trend. We highlight that, unlike the proposed algorithm, these methods do not scale well to large or infinite state spaces.\"}",
"{\"title\": \"Updated results\", \"comment\": \"The latest revision includes (in Appendix E) new experiments and discussion around the effects of stochasticity. This includes a series of experiments on two of the small-scale domains (Gridworld and Mountaincar) where we systematically vary the amount of transition noise in the environment and compare epsilon-greedy and ez-greedy. These results are quite interesting and we hope the reviewer will find them informative. The performance characteristics in these two environments show interesting similarities and differences as the amount of noise is scaled up.\\n\\nWe have also now included experimental results (also in Appendix E) on the sticky-action version of Atari, where we compare only Rainbow-based epsilon-greedy and ez-greedy (due to time constraints). We note that, although the benefits of ez-greedy in terms of the summary statistics is slightly reduced with sticky actions (Figure 9), we still observe significant performance improvement on the harder exploration games (Figure 10). In this section we have also included a more detailed discussion of sticky-actions and their relation to ez-greedy action-repeats. Please note that not all of our seeds (3 per agent) have completed in all games, and thus we cut the summary statistics plot at 175 million frames. We will update the figures with the full 200 million frames before the deadline.\\n\\nDeepSea results are now over 30 seeds for all experimental results presented. The difference between the original and updated plots is minimal.\\n\\nRegarding the bar plots being clipped at 400%. You are correct, we clipped the plots here because for both ez-greedy and the other exploration methods the percent improvement for a few games is so large as to make all other bars much too small to see when plotted together. 
We could report the numeric values in the legend if this is preferable, though one can also observe the difference in performance in these games in the per-game plots shown in Figure 18 of Appendix G.\"}",
"{\"title\": \"Thanks for the clarifications and added plots - looking forward to your new results!\", \"comment\": \"Thank you for the clarifications regarding sticky-actions and the added bar plots, which I found very useful.\\n\\n* Minor comment: In the new bar plots you may be clipping the top performance bars at ~400%.\\n\\n* Regarding relabeling to clarify the plots: I see your point, so I think the way you have it currently is fine. \\n\\nI look forward to seeing your results in stochastic problems.\"}",
"{\"title\": \"Look forward to new experiment results\", \"comment\": \"I want to thank the authors for their efforts to address my concerns. I hope these revisions can be posted to the submission before Nov. 24th so that I can adjust my score accordingly.\"}",
"{\"title\": \"Response to Reviewer #5\", \"comment\": \"Thank you, we appreciate your positive feedback and suggestions on where the paper can be improved. The revised paper fixes the typo and adds the suggested citation, which is highly relevant for this work considering the way they factor the embedding of state and action-sequences, and the property of near-uniformity in state transitions from sampled action embeddings. Furthermore, there may be some hope that learning *this* form of option, though not framed as such, could be done more efficiently than in some of the other discussed approaches due to the embedding being state independent. Given the additional space available, we will work on extending the discussion around the duration distribution in a future revision.\\n\\nYour point about the limitations in the theoretical results is well taken. We expect to add an additional result and discussion that may be relevant to this point (in response to another reviewer) to Appendix A in a future revision and will call your attention to it once available. We are also working on strengthening the theoretical work along the lines of your suggestion, but this is not yet reflected in the current revision. Once we have incorporated this into a revision we will post a message to that effect.\"}",
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for your review and for bringing up some interesting points for discussion. The main points you raise focus on the nature of any form of temporally-extended epsilon-greedy, with the final two perhaps mostly applying to ez-Greedy specifically.\\n\\nWe now (in the current revision) reference Table 1 in the main text. Thank you for catching this.\", \"the_cost_of_full_coverage\": \"We argue that full coverage (in the limit) is important, as without this property we are assuming some MDPs will simply never be solved. You are correct that this property comes at a cost, and thus there is a trade-off between generality and efficiency to be made. Indeed, one might argue that our work is moving (gradually) from generality towards efficiency, when compared with epsilon-greedy. Regarding the example in Figure 1, if we know that the agent is following the optimal policy, then there is no point in further exploration. However, if we don't know this, then the current path could be sub-optimal or going in the completely wrong direction. Without fully exploring the state space the agent would never know whether or not the current policy is optimal/sub-optimal.\\n\\nEpsilon-greedy and ez-Greedy explore forever: When used in practice, the epsilon value is often annealed over time, but the essential point is completely valid. Both epsilon-greedy and temporally-extended epsilon-greedy continue to explore with some probability indefinitely. This is something we would argue is out-of-scope for this work, despite being an important direction for future work.\", \"bandits\": \"It is unclear how any form of temporally-extended exploration would benefit exploration in bandits, due to the lack of a temporal dependency on previous actions. 
Perhaps the concept could be stretched to cover this setting, but it feels like a poor fit.\\n\\n\\\"in complex tasks we can not simply repeat an action in different states.\\\": This was our initial thought as well, but the 'complexity' of tasks in which such a simple set of policies is beneficial is larger than we expected (as evidenced by our empirical results). That said, we agree with the spirit of your point here: ez-Greedy will not be effective on all MDPs. We discuss three ways in which ez-Greedy exploration can be detrimental in our section \\u201cGenerality and Limitations,\\u201d but would welcome insights into what other aspects of a complex domain would make ez-Greedy unsuitable. With additional work on option-learning, we believe the use of temporally-extended epsilon-greedy on such environments, where the learned options capture, rather than assume, the structure of the problem, will be quite effective.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your review and careful consideration of the work. We will attempt to directly address each of your four concerns below.\\n\\n1. The conditions on Theorem 1 may be satisfied, or not, for some combination of a set of options, sampling distribution, and MDP. That is, and perhaps this is your point, knowing if these conditions will hold or not does require some (perhaps significant) prior knowledge about the MDP under consideration. For example, ez-Greedy (which defines the options and sampling distribution) on the standard Chain MDP (in which one of the actions progresses and the other returns to the start state) does satisfy these conditions. But, conversely, if we replace the duration distribution of ez-Greedy with an exponential this is no longer the case because it introduces an exponential dependence on the number of states in the chain. We will add a discussion of this example with the math worked out in Appendix A in a future revision. Additionally, we will attempt to establish sufficient conditions under which the Covering Options of Jinnai et al. (2019) would satisfy Assumption 1 and report back once we have a concrete answer.\\n\\n2. This is a fair concern as our small-scale experiments do not compare with any other baselines than epsilon-greedy. We will attempt to add results for a UCB-based Q-Learning algorithm on the two tabular domains (DeepSea and Gridworld) to Appendix E in a future revision. However, notice that such an algorithm is not immediately applicable to the non-tabular environments, and that the pseudo-counts-based exploration (Rainbow+CTS) was an attempt, by Bellemare et al. (2016), to extend such methods to neural network based RL.\\n\\n3. To address this concern we are extending our small-scale problems to include additional Gridworlds with features that make them more 'adversarial' to the ez-Greedy method. 
There are two additional Gridworlds: (1) one that places obstacles throughout the world, which severely limits the utility of action-repeats, and (2) one that places random traps throughout the world, which penalize over-exploration of known-bad state-actions. We will include these in a revision as soon as possible and post a message notifying you of the update.\n\nIn particular, we will report the performance of ez-Greedy as these disadvantageous properties are increased, showing how performance degrades as the assumptions are violated more dramatically.\n\n4. We hope that our additional result and discussion around your first point will help to motivate the use of the zeta distribution. Additionally, note that Figure 6b (Appendix) shows the performance of the ez-Greedy R2D2-based agent as the mu hyperparameter is varied.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your detailed review, positive assessment and constructive feedback. We have incorporated your other minor comments in the current revision, thank you again, and below attempt to address your primary comments.\\n\\n1. Deterministic domains: This is a fair concern and we are in the process of running a subset of our Atari experiments, and the entirety of our small-scale experiments, on stochastic versions of the domains - we don't expect ez-Greedy to be particularly affected by this and are positive the additional evaluation will confirm this intuition. We will post an additional message when we have updated the revision with these results. \\n\\n2. Regarding sticky-actions, once we have completed the above stochasticity experiments we will include additional discussion regarding the relation between sticky-actions and the action-repeats used by ez-Greedy. Briefly, sticky-actions do not, in general, improve performance. When exploring with action-repeats (and options in general) the agent observes the actions it takes and is thus able to learn about all the states and actions in an exploratory trajectory. With sticky-actions the agent only observes that action which they *intended* to take, not the one actually executed by the environment. Finally, sticky-actions have fairly short duration due to having an exponential decay in probability. A related concept which can improve performance is changing the base action-repeat number (generally set to 4 in Atari). This value can absolutely be tuned in a domain-dependent manner to improve performance, although the conventional value still appears to be the best fixed choice. However, even here there is a notable difference between using action-repeats (also, options) for exploration, and learning the values of those action-repeat (option) policies and using this for credit assignment and planning. 
Game-dependent tuning of action-repeats would confound all three effects (exploration, credit assignment, planning), while we can say with more certainty that ez-Greedy is only directly affecting exploration.\\n\\n3. Yes, the error bars for Rainbow+epsilon-Greedy and Rainbow+ez-Greedy partially overlap. We now include in Appendix E the requested bar-plots showing % improvement per-game. Please note, however, that the objective here is to improve on the challenging domains (requiring more exploration) without degrading performance overall. To make this point a bit clearer we have also included the same bar-plots for Rainbow-CTS and R2D2-RND. The results do indeed show that ez-Greedy suffers a small degradation in performance on a small number of games, while offering fairly large improvements on a larger number of games (this pattern holds in both agent settings but differ in the specifics). Meanwhile, we observe that for CTS and RND the number and magnitude of the games for which performance is degraded is much larger, helping to provide context for the summary plots shown in the main text.\\n\\n4. The number of seeds was chosen only to match that of published results, we are running additional seeds and will update the DeepSea results to be over 30 seeds in a future revision.\\n\\nRegarding the \\\"Rainbow (NoisyNet)\\\" label, we are open to this, but want to check with the reviewer on whether they still believe this to be the clearest way of labeling the methods. All the Rainbow-based agents we considered except for Rainbow+CTS include NoisyNets. We remark on this choice in the paper, as it was found that NoisyNets had a small negative effect on the Rainbow+CTS agent. Thus, Rainbow (e-greedy) is actually Rainbow (NoisyNets + e-greedy). Do you still think your proposed naming is the best choice? Another option to improve clarity would be to drop the \\\"Rainbow\\\" and \\\"R2D2\\\" prefix entirely for these plots and specify only the exploration in each case. 
This seems cleanest, and we would appreciate the reviewer's thoughts on this.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for taking the time to read and review our work. We focus on addressing the weak points you have brought up.\", \"theoretical_analysis\": \"We certainly want theoretical results that are as general as possible under the weakest possible assumptions. Although the impact of our theoretical results will be somewhat limited by their assumptions, they still serve as an important contribution by illustrating how the components of temporally-extended epsilon-greedy influence sample complexity. We believe this adds significantly to the paper because it helps to provide intuition about when option-based exploration will be beneficial.\\n\\nWe were unsure regarding the second two weak points you raised. The concrete method, ez-Greedy, is extremely simple and specified in pseudo-code in the appendix. Would you be willing to clarify so we can better address them?\\n\\nThe duration is sampled from a zeta distribution, which has implementations in many statistical packages (e.g. numpy.random.zipf), but can also be easily approximated with access to the Riemann zeta function (for normalization). In this case a duration n is sampled with probability n^(-mu) / Z(mu), where Z is the Riemann zeta function. We may have misunderstood the reviewer's concern, so please consider replying to clarify.\\n\\nWe would suggest that this concrete algorithm (ez-Greedy) is less an algorithmic contribution and more a simple instantiation of the more general temporally-extended epsilon-greedy approach discussed in the first part of the paper.\"}",
"{\"title\": \"Review\", \"review\": \"**Summary:**\\n\\nThis paper offers a critique of current exploration techniques as being overly complex and engineered to only work on specific tasks. As an alternative, the paper proposes temporally extended $ \\\\epsilon$-greedy exploration, which maintains the simplicity and generality of $ \\\\epsilon$-greedy while offering better exploration. More specifically, the proposed algorithm simply repeats the randomly chosen action for a random number of steps (where the number of steps comes from a specific distribution); this is a specific instantiation of the more general algorithm presented in the paper, where any set of semi-markov options can be used.\\n\\n--------------------------------------------------------------------\\n\\n**Strengths:**\\n\\n1. Clarity. This paper is very well-written and clear, making it enjoyable to read. It sets up the shortcomings of prior methods and offers a simple solution. I also especially appreciated the clear discussion of the limitations of the proposed method.\\n\\n2. Strong critique of prior methods to provide motivation. It is an important observation that while many exploration methods are developed in the theory and deep RL communities, they are often inferior in practice to simple strategies like $ \\\\epsilon$-greedy. While this is not a novel contribution, this paper really drives home the point by providing a slightly smarter variant of dithering that competes favorably with much more complicated algorithms. This is an especially important contribution of this paper since it makes the point to the RL community that simple exploration strategies may be more effective in practice, but there is still room to innovate while maintaining simplicity and generality.\\n\\n3. Strong empirical results. The experiments clearly show an improvement over $ \\\\epsilon$-greedy in small benchmark problems. 
Then, they demonstrate how $ \\\\epsilon z$-greedy even improves over more complocated exploration strategies for deep RL algorithms applies to atari relative to more complicated exploration algorithms like RND (at least in the \\\"average\\\" case, but not on \\\"hard exploration\\\" games like Montezuma's revenge).\\n\\n \\n\\n--------------------------------------------------------------------\\n\\n**Weaknesses:**\\n\\n1. The theory could be tightened. The paper would be stronger if the theorem were stated more formally (defining polynomial sample complexity) and the proof provided the specific results being used from the cited papers (maybe as lemmas in the appendix). At a more substantive level, it is not clear how exhaustive the list of desired properties of an exploration algorithm is. The paper lists three desiderata for an exploration strategy: (1) that it is simple, (2) that it is stationary, and (3) that it promotes full coverage of the state-action space. Each of these goals makes sense, but the paper does not provide any framework to explain why these are a necessary or sufficient set of properties to yield the desired behavior. Moreover, it is not clear what the tension or tradeoffs are between the properties. A more clear discussion of these issues or formal framework could go a long way toward clarifying the landscape of exploration algorithms. \\n\\n--------------------------------------------------------------------\\n\\n**Recommendation:**\\n\\nI reccomend accepting this paper and gave it a score of 8. I think the paper provides a clear argument for simple and general exploration strategies and that $ \\\\epsilon z$-greedy seems to be an algorithm that achieves these goals. 
Moreover, I think that the paper makes an important point to the community working on exploration algorithms that the complicated algorithms being developed can often be beaten by simple strategies when considering a broad range of problems.\\n\\n--------------------------------------------------------------------\\n\\n**Additional feedback:**\\n\\n- One reference that I think should be included when discussing learning temporally extended representations of actions is [1].\\n\\n- Typo: line 2 of the last paragraph on page 1 should be \\\"such a compromise\\\".\\n- The discussion of the choice of distribution over durations was somewhat abrupt. This is an interesting part of the algorithm and it would be nice if it was fleshed out a bit more. \\n\\n\\n\\n[1] Whitney, W., Agarwal, R., Cho, K., & Gupta, A. (2019). Dynamics-aware Embeddings. *arXiv preprint arXiv:1908.09357*.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An important problem which is weakly supported by a concrete model\", \"review\": [\"This paper presents a generalized overview of temporally extended e-greedy exploration. The basic principle of temporally extended e-greedy exploration is to apply the e-greedy exploration policy for an extended period of time. Specifically, the authors use a heavy-tailed zeta distribution.\", \"Strong points\", \"This paper analyzes the theoretical properties of temporally extended e-greedy exploration in Theorem 1.\", \"The ez-Greedy policy outperforms the e-Greedy policy in some experimental environments qualitatively and quantitatively.\", \"Weak points\", \"The theoretical analysis is too general under too strong assumptions. Thus, the presented results are not particularly novel.\", \"The presented methods based on the zeta distribution are not concrete enough.\", \"The algorithm is not clearly specified. Thus, it is hard to evaluate the algorithmic contributions of this paper.\", \"Although this paper presents a general analysis of temporally extended e-greedy exploration, the presented ideas are too general. Thus, it is very hard to verify the technical contributions (in terms of models and algorithms) of this paper.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Review for Temporally-Extended e-Greedy Exploration\", \"review\": \"##########################################################################\", \"summary\": \"This paper proposes a simple yet general approach for exploration in discrete-action problems. The proposed approach, called ez-greedy, combines randomly selected options with the well-adopted e-greedy exploration policy to achieve temporally-extended e-greedy exploration. The paper overviews previously published exploration methods from the perspective of their inductive biases, and clearly states where the inductive bias of ez-greedy would be better suited over e-greedy. The paper reports results in tabular, linear, and deep RL settings, on numerous domains ranging from classic toy problems to Atari-57. The results are interesting, and the analysis aligns with and nicely supports the narrative of the paper. \\n\\n##########################################################################\", \"reasons_for_score\": \"Overall, I vote for accepting this paper. The idea is simple (a generalization of e-greedy) and the discussions nicely illustrate the main properties of an ideal generally-applicable exploration method. The experiments clearly show where ez-greedy exploration would be useful. Also, they show that the inductive bias of ez-greedy does not hurt the performance much in simpler dense-reward domains while more specialized algorithms suffer significantly. \\n\\n##########################################################################\", \"pros\": \"See \\\"Reasons for Score\\\" above.\\n\\n##########################################################################\", \"cons\": \"1) The results in Atari are based on a deterministic version of Atari (i.e. not using \\\"sticky actions\\\"). Also, in DeepSea the deterministic version of the task is used. Ideally, I would've liked to see empirical results in stochastic domains as well. 
More importantly, I'm not sure why only deterministic domains are used?\\n\\n2) The literature on action-repeats is discussed briefly. But it's hard to know how prior related works were different in their formulation and use of action-repeats. Also, could you clarify how sticky-actions are positioned w.r.t. ez-greedy (beyond that the purpose behind sticky-actions was to induce stochasticity in the environment as opposed to being used explicitly for exploration)? For instance, do sticky-actions actually improve learning performance in the same domains where ez-greedy improves performance?\\n\\n3) The rainbow + e-greedy vs. Rainbow + ez-greedy Median and Mean plots do not show significant findings. I think a bar-plot should be added to show per-game relative human-normalized improvements for these versions. The same should be done for R2D2 (e-greedy) vs. R2D2 + ez-greedy as well. I think what this could reveal is symmetric bars over the 57-Atari games (i.e. number of games in which ez-greedy outperforms and underperforms e-greedy are the same). Also, the extent of improvements on average is the same as shown in the Mean plot of Figure 8. \\nTo clarify, I don't see an issue with this outcome (i.e. if the bars are symmetric; meaning overall there are as many games in Atari-57 that would benefit from ez-greedy over e-greedy as there are games in which the opposite is the case). This does not go against the narrative of the paper which makes it clear that they each have an inductive bias that suits some tasks over others. But I think this should be made super clear in the results section, through such bar plots. For the same reason, I think the Mean plots should also be brought to the main text and shown next to the Median curves. \\n\\n4) Why only 5 random seeds in DeepSea? 
I suggest showing results for 30 random seeds like in the other toy problems.\\n \\n##########################################################################\", \"questions_during_the_rebuttal_period\": \"Please address and clarify the \\\"Cons\\\" above.\\n\\n##########################################################################\", \"minor_comments\": [\"It would be useful to replace \\\"Rainbow\\\" with \\\"Rainbow (NoisyNet)\\\" in Figure 3 so as to emphasize the difference between \\\"Rainbow\\\" and \\\"Rainbow + e-greedy\\\". Similarly, for \\\"R2D2\\\" it'd make it easier for the reader if the Figures show \\\"R2D2 (e-greedy)\\\".\", \"Table 1: \\\"Algorithm (@200M)\\\": M doesn't need to be italicized (to be consistent with \\\"Algorithm (@30B)\\\").\", \"It'd make it easier if \\\"(100%)\\\" is added to the y-axis of Median/Mean plots.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Insufficient impact\", \"review\": \"The paper presents an extension of the \\\\eps-greedy strategy in order to increase the coverage of exploration in RL problems. The main idea is to take an exploratory option instead of a single action, e.g., by repeating an action for n steps, where the duration of the repeat is sampled from some distribution. The authors demonstrate that given certain conditions, the algorithm will converge in polynomial time for the Q-learning method.\\n\\nOverall, the paper provides a simple yet effective exploration technique for RL methods. However, there are some unclear points regarding the impact of the approach on the field; I thus vote for a weak reject.\", \"pros\": [\"A simple and universal approach to promote full coverage in reinforcement learning\", \"Scalable and easy to implement\", \"Interesting inspiration from animal foraging behavior\", \"Demonstrates effective performance in conducted experiments\"], \"comments\": [\"The key concern about the paper is whether gaining full coverage in exploration is beneficial at any cost. In my view, RL favors smarter exploration over total coverage. For instance, the exemplary scenario in figure 1 shows how the proposed approach increases the coverage of the state space. But why would we need to search the whole space if we are on the right path to the goal? Unnecessary coverage will lead to higher regret and delayed convergence as a result of naive exploration.\", \"Another problem with \\\\eps-greedy is that it explores forever. Hence, it would be an improvement to stop/reduce exploration at some point rather than intensify it. Assume in the steps close to the goal, the worst action has the same probability of being chosen as the second-best action, and it repeats for n times, which moves the agent farther from the goal. 
I would suggest adding evaluation in terms of regret and/or convergence time in such scenarios.\", \"The limitations enumerated in the paper, together with the approach's dependency on some strong conditions on the options, seem to outweigh the benefits of the proposed method. Besides, it is not general in terms of the class of applications; e.g., in complex tasks we can not simply repeat an action in different states.\", \"How does the approach behave in very simple settings, e.g., bandits? I would recommend adding such an analysis for online exploration into the paper.\"], \"minor\": \"Table 1 is not referenced in the text\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A simple algorithm.\", \"review\": \"This paper proposes an easy-to-implement algorithm for efficient exploration, which is a temporally-extended version of \\\\eps-greedy. Instead of uniformly selecting primitive actions with probability \\\\eps for exploration, the algorithm explores using options. In theory, it has been shown that if the option set is well-designed with a sublinear expected reaching time, the algorithm achieves a polynomial sample complexity. Empirically, the authors tested a simple instantiation, ez-greedy, in multiple environments and claimed that ez-greedy improves exploration and performance in sparse-reward environments with minimal loss in performance on easier, dense-reward environments.\\n\\nI appreciate this work for its motivation, and the algorithm is simple to implement and not computationally expensive, which points to an interesting direction for future study.\", \"but_there_are_several_concerns_i_have\": \"1. The conditions in Thm. 1, i.e. a sublinear upper bound on visiting time and 1/p(w), are not straightforward for me to realize. How should one construct such an option set if no prior knowledge is given? Do the option-learning methods in [Jinnai et al. 2019, 2020] and [Machado et al. 2017 2018] satisfy these conditions? Does ez-greedy satisfy these conditions? If not, what approximately are the upper bounds for these heuristics? There should be more discussion about this.\\n2. In tabular RL, it would be more complete if the authors could compare with UCB-based exploration strategies as well, e.g. the UCB-Q as in http://papers.nips.cc/paper/7735-is-q-learning-provably-efficient.\\n3. As mentioned by the authors, the performance of ez-greedy depends on whether the effects of actions differ significantly across states. There should be more adversarial cases to show the possible outcomes of ez-greedy compared with other exploration strategies.\\n4. 
As mentioned by the authors, action-repeats are not new in deep RL. The novelty of this work lies in using them for exploration with sampled duration. However, the selection of the zeta distribution for duration sampling is only partially empirically demonstrated, as is the choice of \\\\mu. It would be better if the authors could support it with a theoretical justification or some quantitative analysis in terms of, e.g., how the value of \\\\mu affects the final performance.\\n\\nI am open to adjusting the score if the rebuttal can address my concerns.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
HK_B2K0026 | Attention Based Joint Learning for Supervised Electrocardiogram Arrhythmia Differentiation with Unsupervised Abnormal Beat Segmentation | [
"Xinrong Hu",
"long wen",
"shushui wang",
"Dongpo Liang",
"Jian Zhuang",
"Yiyu Shi"
] | Deep learning has shown great promise in arrhythmia classification in electrocardiogram (ECG). Existing works, when classifying an ECG segment with multiple beats, do not identify the locations of the anomalies, which reduces clinical interpretability. On the other hand, segmenting abnormal beats by deep learning usually requires annotation for a large number of regular and irregular beats, which can be laborious, sometimes even challenging, with strong inter-observer variability between experts. In this work, we propose a method capable of not only differentiating arrhythmia but also segmenting the associated abnormal beats in the ECG segment. The only annotation used in the training is the type of abnormal beats and no segmentation labels are needed. Imitating human's perception of an ECG signal, the framework consists of a segmenter and classifier. The segmenter outputs an attention map, which aims to highlight the abnormal sections in the ECG by element-wise modulation. Afterwards, the signals are sent to a classifier for arrhythmia differentiation. Though the training data is only labeled to supervise the classifier, the segmenter and the classifier are trained in an end-to-end manner so that optimizing classification performance also adjusts how the abnormal beats are segmented. Validation of our method is conducted on two datasets. We observe that involving the unsupervised segmentation in fact boosts the classification performance. Meanwhile, a grade study performed by experts suggests that the segmenter also achieves satisfactory quality in identifying abnormal beats, which significantly enhances the interpretability of the classification results. | [
"interpretability",
"multitask learning",
"attention mechanism",
"electrocardiography"
] | Reject | https://openreview.net/pdf?id=HK_B2K0026 | https://openreview.net/forum?id=HK_B2K0026 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"f-eeKCdS88",
"1eQcpQx2mI",
"z6E0ts5WSbB",
"16zUzUM8Ig",
"QzGcQ-thdL",
"7AM3ttMwsxI",
"rFHOnxFFtzA",
"nsYbfQVfIwZ",
"lTXSA9uRWAe"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040501211,
1605542135390,
1605541989505,
1605541837354,
1605541740661,
1603993770455,
1603907806985,
1603897577921,
1603701363627
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3474/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3474/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3474/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3474/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3474/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3474/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3474/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3474/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper received 4 reviews with mixed initial ratings: 5, 6, 4, 4. The main concerns of R1, R4 and R2, who gave unfavorable scores, included: insufficient evaluation (lack of experiments on public datasets, small sample size), an ad-hoc nature and overall limited novelty of the method, a number of issues with the presentation. In response to that, the authors submitted a new revision and provided detailed answers to each of the reviews separately. After having read the rebuttals, the reviewers (including R3, who initially gave a positive rating) felt that this work overall lacks methodological novelty and does not meet the bar for ICLR.\\nAs a result, the final recommendation is to reject.\"}",
"{\"title\": \"Response to some comments and questions\", \"comment\": \"1, Regarding the comment \\u201cThe topic seems too narrow for the computer science community.\\u201d\\n\\nWe have added an extra experiment on another public ECG dataset to demonstrate the generalization of our method to the ECG classification problem. The accuracy and AUC-ROC increase by 0.006 and 0.002 respectively with a segmenter added to the classifier. Meanwhile, we also observe a promising segmentation result. \\n\\n2, Regarding the comment \\u201cBut (image) segmentation works are also worth (or even more) investigating.\\u201d \\n\\nIt\\u2019s worth noting that our work focuses on unsupervised segmentation. It\\u2019s true that modifications of image segmentation models can fit ECG segmentation, and this is actually what some works concerning ECG segmentation do. However, they all need annotations, as we mentioned in paragraph 2 of Section 1, while in our work we focus on unsupervised segmentation. We notice that there are emerging works concerning unsupervised image segmentation methods very recently. Yet due to the very different nature of natural images and ECG signals, these unsupervised methods can hardly be directly applied to ECG segmentation. \\n\\n3, Response to the concern about the evaluation of segmentation \\n\\nFor evaluation of the segmentation results, the example we gave in Fig 4 is an illustration of the three classes in the grade study. Actually, we asked independent expert cardiologists to do a blind grade study on the segmentation result (100 ECG segments) and the result is shown in Fig 5. We think it would be too much work for a conference paper to conduct a multi-site study (and it is very rare, too). \\n\\n4, More details about data preprocessing \\n\\nFor data preprocessing, we use a Butterworth filter to build a low-pass filter with a cutoff frequency of 60. What\\u2019s more, we apply a low-pass FIR filter to remove the baseline drift and the cutoff frequency is 4. 
\\n\\n5, Response to \\u201cIn figure 3, are there duplicate attention maps in every column?\\u201d \\n\\nYes, there are. We enforce the output of U-Net to have only one channel and duplicate it into 12 copies so that the attention maps for 12 leads are exactly the same. This is because the arrhythmia occurs synchronously for the 12 leads.\"}",
"{\"title\": \"Response to some comments and questions\", \"comment\": \"1, Response to the concern about the experimental evaluation\\n\\nActually, Moskalenko et al. (2019); Oh et al. (2019) deal with the ECG segmentation problem and the Pan-Tompkins algorithm is used for finding the QRS complex\\u2019s position in an ECG signal, and none of them can be used for ECG classification. Table 1 lists the comparison of classification results of different methods, which are more related to the works mentioned in the first paragraph of Section 1. The commonly used models for ECG classification include CNN and CRNN. Different works make problem-specific modifications to CNN or CRNN for the target problem/dataset, and there does not exist a state-of-the-art approach for the PVC differentiation problem. The \\u201cclassifier only\\u201d in Table 1 stands for the CNN baseline similar to the state-of-the-art in many other ECG classification problems, while we got poor classification performance with CRNN, so we did not present it. Note that Hong et al. (2019) actually uses a similar baseline for comparison without referring to a specific previous work. \\n\\n2, Response to the two questions about PhysioNet \\n\\nRegarding PhysioNet, we did run experiments on a dataset derived from the public MIT-BIH dataset to classify ECG segments with atrial premature beat (APB) and ones with premature ventricular contraction (PVC). For this task, the baseline method only using CNN models achieves almost 99% accuracy, so the necessity for adding a segmenter is minor. Still, we do observe improvement of classification performance with our methods and a promising segmentation result. We will add it to the new version of our paper. On the other hand, the problem of differentiating subclasses of PVC is more challenging, with low classification accuracy, so there is large room for improvement. 
As for transfer learning, each ECG classification problem is quite unique in terms of features and PVC differentiation is among the most difficult ones. Therefore, transfer learning does not work well. \\n \\n3, Regarding how the train/val/test splits are organized \\nIt\\u2019s a good point to mention physiological signals\\u2019 large individual variation. Actually, for each patient, there are at most three segments, and in many cases one patient has only one segment. When doing k-fold validation, we split patients, not segments, into k folds.\"}",
"{\"title\": \"Response to some comments and questions\", \"comment\": \"Thank you for your advice; we will edit our paper to make our writing more understandable.\\n\\n1, Regarding the comment \\u201cthe proposed approach seems rather ad-hoc\\u201d \\n\\nI agree that combining segmentation and classification is not a novel invention. However, our contribution is a new way of unsupervised learning through the assistance of supervised learning on a related but different task. We combine unsupervised segmentation and supervised classification in the form of attention maps applied directly to the input signal, which provides some explainability for the classification task. \\n\\n2, Regarding the comment \\u201cthere is no systematic evaluation how the method compares to other state-of-the-art ECG classification methods\\u201d \\n\\nAs for comparison to other state-of-the-art ECG classification methods, when focusing on classifying an ECG segment, there are two types of widely used methods, which are CNN and CRNN (a combination of CNN and RNN). In Table 1, the \\u201cclassifier only\\u201d represents the CNN baseline. We also tried the CRNN method but the accuracy is only around 70% due to the challenging nature of the PVC differentiation problem as explained in Section 1. Hence we chose not to present the result.\", \"response_to_detailed_comments\": \"i. What is the output of the classifier? Is this a binary label? Or a multi-class label? \\n\\nThe labels are binary. Please kindly refer to Section 5.1, where we have made it clear \\u201cWe evaluate the performance of all methods on two tasks: differentiating PVC originating in LV and RV, as well as PVC originating in LVOT and RVOT\\u201d. \\n\\nii. \\u201c\\u2026 the output of S has only 1 channel and we expand it channel-wise so that it matches the channel dimension of the ECG signal \\u2026\\u201d \\u2013 What exactly is meant here? In Fig 1 it seems that the segmentation output has naturally 12 channels? 
Should the segmentation be identical for all channels? \\n\\nThe segmentation result should be identical for all channels since the abnormality occurs at the same time for all 12 leads. To enforce that, we set the output of the segmenter to have only one channel and duplicate it 12 times, which is called \\u201cexpand\\u201d in the paper. After that, we apply a pooling layer. In Fig 1, the after-pooling attention map is not the output of segmenter s. We will make it clear in Fig. 1 that the segmenter does not contain the two-step postprocessing afterwards. \\n\\niii. \\u201cWe do not use the output of the segmenter L as the attention map directly but instead perform a pooling with large kernel size first\\u201d \\u2013 Why is this done? What does \\u201clarge kernel\\u201d mean? \\n\\nWe explain it shortly afterwards: \\u201cout of consideration for both interpretability and performance\\u201d. We would like to generate a window-like attention map so that the abnormality area is uniformly highlighted in contrast to normal beats. Besides, if the attention weight varied within the \\u201cwindow\\u201d, the shape of abnormal beats would be distorted. As for \\u201clarge kernel\\u201d, traditional 3*3 and 5*5 max pooling layers have kernel sizes of 3 and 5. Global max pooling\\u2019s kernel size is the shape of the input signal. In our case, the kernel size is almost half of the beat length (e.g. 200). Compared to traditional 3*3 max pooling, our pooling has a pretty \\u201clarge\\u201d kernel. \\n\\niv. Where is the attention map in Fig. 1? \\n\\nThe attention map is marked by the orange line segment, the input to the Hadamard product. \\n\\nv. How are the Premature Ventricular Contraction (PVC) origin labels defined? Is that a single time point (per channel or common for all channels) or a time window? \\n\\nThe PVC origin label is segment-wise, which means for each segment (12 leads) there is only one label denoting whether there are LV or RV (LVOT or RVOT) beats.\"}",
"{\"title\": \"Response to some comments and questions\", \"comment\": \"1, Regarding the comment \\u201cThe work is light on theory and the contribution mostly resides on the empirical improvement\\u201d\\n\\nOur work\\u2019s theoretical contribution is providing a new way of unsupervised learning through the assistance of supervised learning on a related but different task. In this particular problem, it is done by backpropagating the supervised classification loss to the unsupervised segmenter. Besides, we propose an explicit attention approach directly on the input signal, different from those in the literature which apply attention on intermediate features, for better explainability of the results. We further discuss why pooling is important for the attention map. \\n\\n2, Regarding the comment \\u201cthe evidence for this improvement is not rock solid, as it is shown on a single dataset, which has a rather small sample size\\u201d \\n\\nFor a more general conclusion, we have added the comparison results between our method and the baseline on the public MIT-BIH dataset to the new version of our paper in section 4. The accuracy and AUC-ROC increase by 0.007 and 0.005, respectively, with a segmenter added to the classifier. The dataset has almost 3000 samples in total and the baseline already reaches 0.98 (accuracy) and 0.99 (AUC-ROC), so we think the improvement is acceptable. Meanwhile, we also observe a promising segmentation result. \\n\\n3, Response to the question about hyperparameter selection \\n\\nIn terms of hyperparameters, both the learning rate and architecture parameters are fixed independently of the dataset (we used the same setting in the new experiment on the public dataset). The kernel size depends linearly on the sampling frequency of the dataset, which is natural. For different datasets, we can normalize the sampling frequency to use the same kernel size, as has been shown in the newly added experiments on the public dataset. 
\\n\\n4, Response to the \\u201cnot really significant improvement\\u201d \\n\\nLastly, thanks for providing a new perspective on evaluating the solidity of our result. Actually, when n=500 and assuming the true accuracy is 90%, the confidence interval with 95% confidence is +/- 2.62%. I can understand your concern that the margin between the accuracy of our method and the baseline is smaller than the 95% confidence interval. However, the assumption behind this calculation is that test results for different samples are independent, which is not true in our case. This could lead to a smaller deviation. Besides, it\\u2019s hard to attain a 5% accuracy increase anyway when the baseline accuracy already reaches 90%. On the other hand, this confidence interval theory only applies to accuracy, not to specificity, sensitivity, or AUC-ROC. Therefore, by showing the comparison of these metrics, we can also conclude performance improvement with our method.\"}",
"{\"title\": \"Empirical evidence has some loopholes\", \"review\": \"This manuscript contributes a neural architecture to classify arrhythmia type from ECG data. The signal is treated as 1D, and the architecture performs joint segmentation-classification, detecting the abnormal beats and then classifying them as a function of their origin. It uses U-Nets for segmentation and, for classification, a CNN and one fully-connected layer. The U-Net segmentation generates weights that are considered as an attention map and multiplied with the original time series after pooling on a window (which amounts to smoothing).\\n\\nCompared to the prior art, the central contribution put forward is the addition of the segmentation component of the architecture.\\n\\nThe work is light on theory and the contribution mostly resides on the empirical improvement. However, the evidence for this improvement is not rock solid, as it is shown on a single dataset, which has a rather small sample size. Also, I fear that hyper-parameters are not set fully independently of the final error measure.\\n\\nHow are hyper-parameters (such as learning rate or architecture parameters) chosen? Given the procedure exposed in section 5.2, it seems to me that some of the architecture parameters (kernel size) were not chosen independently of the test set. Such a choice will incur a positive bias with regard to the actual expected generalization error.\\n\\nWith n=500 and an accuracy of 90%, the p=.05 confidence interval of a binomial model is 5%. Hence, the improvements observed by adding the segmentation on top of the classifier do not seem really significant.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"CNN-based approach for segmentation and classification of ECG signals which is quite ad-hoc and limited novelty\", \"review\": \"The paper proposes a framework for the classification of arrhythmias in electrocardiogram (ECG) data. The proposed approach performs segmentation and classification of the ECG signal. The segmenter performs segmentation of the signal (also called attention map) even though the term segmentation is not quite correct. This attention-modulated signal is then classified to identify the origin of Premature Ventricular Contraction (PVC). The proposed approach is evaluated on a dataset from a single machine consisting of 508 segments (I am not sure what \\u201csegments\\u201d means in this context). The results seem ok, but it is not clear to me what level of performance is required in order to achieve a similar level of performance as an expert.\", \"main_concern_is_that_the_proposed_approach_seems_rather_ad_hoc\": \"The combination of segmentation (or attention) and classification in a joint fashion seems hardly new and while the results obtained are good, there is no systematic evaluation how the method compares to other state-of-the-art ECG classification methods. Another problem is that the writing in the paper is not always clear and it is often unclear what exactly the authors are doing. As a result, it is quite difficult to exactly assess what the authors have done or what they mean.\", \"detailed_comments\": \"\\u2022 What is the output of the classifier? Is this a binary label? Or a multi-class label?\\n\\u2022 The authors write \\u201c\\u2026 the output of S has only 1 channel and we expand it channel-wise so that it matches the channel dimension of the ECG signal \\u2026\\u201d \\u2013 What exactly is meant here? In Fig 1 it seems that the segmentation output has naturally 12 channels? 
Should the segmentation be identical for all channels?\\n\\u2022 \\u201cWe do not use the output of the segmenter L as the attention map directly but instead perform a pooling with large kernel size first\\u201d \\u2013 Why is this done? What does \\u201clarge kernel\\u201d mean?\\n\\u2022 Where is the attention map in Fig. 1?\\n\\u2022 How are the Premature Ventricular Contraction (PVC) origin labels defined? Is that a single time point (per channel or common for all channels) or a time window?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper presents a semi-supervised approach for ECG segmentation and PVC classification. The application is well motivated. I have some concerns about the experimental evaluation and novelty described below. I think it has the makings of a promising paper but would like to see responses to these questions.\", \"review\": \"This paper presents a method for segmentation and classification of ECG data applied to the task of segmenting and detecting Premature Ventricular Contractions (PVC). The task is semi-supervised, in the sense that segmentation labels are not required but labels for the PVC events (classification) are used.\\nThe authors motivate this application quite well and detecting abnormalities in ECG signals is an important task of clinical relevance. I can understand why segmentation labels may be very laborious to collect and unsupervised methods would be desirable.\\n\\nThe proposed approach builds upon U-Net and introduces some task-specific changes. However, I would argue that this is primarily an application paper. I don't mean that as a criticism necessarily; I think that strong and well-motivated applications of machine learning are important and informative. However, it would be helpful if the authors could discuss more about how their approach might generalize to other tasks, both the detection of other types of arrhythmias and other temporal segmentation and classification tasks. \\n\\nMy main comments regarding the paper are around the experimental evaluation. The authors highlight that there are some published baselines for this task or at least similar related works (e.g., Moskalenko et al. (2019); Oh et al. (2019)) and/or the authors could have applied classification on top of features extracted using Pan-Tompkins - but that would be a more crude baseline. While I recognize that these approaches might not enable unsupervised segmentation and so direct comparisons on that might be hard with the full approach they propose. 
It might be possible to present a comparison of classification metrics on their own. Perhaps I am misunderstanding, but it doesn't seem as though Table 1 includes such a comparison; rather, the baselines are different from the previously published methods - is that correct? I would almost describe Table 1 as ablation results rather than a comparison with other published baselines. I'd like to know the authors' response to that, and if Table 1 does show these results perhaps linking the rows to the previous approaches might be helpful? Or justifying why it isn't appropriate to show these comparisons. I don't say this just because the authors should show better numbers, but rather to ground the chosen baselines in the context of previous work in this space.\\n\\nBuilding from the previous point, I think this paper would be an excellent case for showing transfer learning results; it seems to me that PhysioNet provides a large amount of available data for ECG classification. A couple of questions I'd like to hear the authors' responses to:\\n1) Why did they not do any experiments on these public datasets? Is there a reason they are not appropriate? Do they not have the right labels, are they not large enough, do you need full 12-lead recordings (I am not sure if they are available on PhysioNet datasets - but I imagine so.)\\n2) Even if training your method on your dataset is preferable, it would seem natural to test it on a set from PhysioNet, perhaps even with a different type of arrhythmia, to see how much performance degrades? This I think would be most informative, both showing segmentation and classification results.\\n\\nFig. 3 is a nice illustration, but it is quite difficult to read. I might suggest reorganizing it. I am not sure showing multiple leads is necessary and maybe limiting to two columns might help. I'd encourage the authors to leverage supplementary material to show more examples as I do think these help. 
\\n\\nFinally, physiological signals are notorious for having large individual variation. I'd be interested to have the authors discuss more about this. I couldn't find the information about how the train/val/test splits were organized and whether this was person independent etc. The following sentence in Section 4.2 \\\"We apply five-fold cross-validation with different classes evenly distributed between folds, and the average performance is reported\\\" doesn't seem to mention that. Knowing more about the splits would be very helpful. This is perhaps another reason that performing experiments on at least one PhysioNet dataset would be helpful as the train, val, test splits could be released. But I acknowledge that the authors say they will release their data which is good.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper, but the topic is too narrow; related image segmentation works needed\", \"review\": \"This paper proposes a deep neural network for Premature Ventricular Contraction (PVC) differentiation and segmentation from electrocardiogram (ECG) signals. The network is jointly trained as a segmenter and a classifier in a multitask learning manner. Differentiation is achieved by the classifier, and segmentation is achieved by pooling the segmenter\\u2019s output for window-style attention. Quantitative experiments show better performance than baselines on differentiation tasks. Qualitative experiments show the effectiveness of segmentation tasks.\\n\\nThe results look interesting, and it might have a broader impact on practical usage of AI models in the clinical environment. However, my concerns are: \\n\\n1) The topic seems too narrow for the computer science community; it is more likely a paper for the biomedical engineering or computing-in-cardiology community. The proposed method also lacks in-depth technical/theoretical analysis; thus the paper's novelty is limited. \\n\\n2) The related works include multitask learning and attention mechanisms. But (image) segmentation works are also worth investigating (or even more so). Just a simple modification of image segmentation neural networks (such as Conv2D -> Conv1D) can make them suitable for ECG segmentation tasks. \\n\\n3) For the evaluation of segmentation, only a few qualitative examples are not convincing. At least, a comprehensive user study by a community of cardiologists is needed.\", \"some_questions\": [\"Could you provide more details about data preprocessing? Which filters do you use? 
What are the cut-off frequencies for high-pass filter and low-pass filter?\", \"In figure 3, are there duplicate attention maps in every column?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
5i4vRgoZauw | Wiring Up Vision: Minimizing Supervised Synaptic Updates Needed to Produce a Primate Ventral Stream | [
"Franziska Geiger",
"Martin Schrimpf",
"Tiago Marques",
"James J. DiCarlo"
] | After training on large datasets, certain deep neural networks are surprisingly good models of the neural mechanisms of adult primate visual object recognition. Nevertheless, these models are poor models of the development of the visual system because they posit millions of sequential, precisely coordinated synaptic updates, each based on a labeled image. While ongoing research is pursuing the use of unsupervised proxies for labels, we here explore a complementary strategy of reducing the required number of supervised synaptic updates to produce an adult-like ventral visual stream (as judged by the match to V1, V2, V4, IT, and behavior). Such models might require less precise machinery and energy expenditure to coordinate these updates and would thus move us closer to viable neuroscientific hypotheses about how the visual system wires itself up. Relative to the current leading model of the adult ventral stream, we here demonstrate that the total number of supervised weight updates can be substantially reduced using three complementary strategies: First, we find that only 2% of supervised updates (epochs and images) are needed to achieve ~80% of the match to adult ventral stream. Second, by improving the random distribution of synaptic connectivity, we find that 54% of the brain match can already be achieved “at birth" (i.e. no training at all). Third, we find that, by training only ~5% of model synapses, we can still achieve nearly 80% of the match to the ventral stream. When these three strategies are applied in combination, we find that these new models achieve ~80% of a fully trained model's match to the brain, while using two orders of magnitude fewer supervised synaptic updates. These results reflect first steps in modeling not just primate adult visual processing during inference, but also how the ventral visual stream might be "wired up" by evolution (a model's "birth" state) and by developmental learning (a model's updates based on visual experience). 
| [
"computational neuroscience",
"primate ventral stream",
"convolutional neural networks",
"biologically plausible learning"
] | Reject | https://openreview.net/pdf?id=5i4vRgoZauw | https://openreview.net/forum?id=5i4vRgoZauw | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"WP8MKnPBetp",
"ZLa-QoRZoaj",
"FR9Pg100AJ_",
"mLOqe2iMKBY",
"LDDTX0DUsft",
"XPIhGBJPtmr",
"dZsRkolWip",
"_8B3rNPcTVp",
"beqW60hA6wi",
"ukxUIgf6KH5",
"CyRW9Pv0e6",
"x69s-QVkUXe",
"6edoeUPGvl",
"pk3Rx0YjuPx",
"nqpB8mapTJD"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040383255,
1606268349688,
1606268331280,
1605977432078,
1605769655576,
1605727833114,
1605727812892,
1605727741348,
1605727642778,
1605727616407,
1605727472664,
1604470836366,
1604432109796,
1604157747302,
1603898846905
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3471/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3471/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3471/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3471/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3471/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3471/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3471/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3471/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3471/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3471/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3471/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3471/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3471/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3471/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper received 2 borderline accepts, 1 accept, and 1 reject.\\n\\nThis paper was discussed on the forum and no consensus was reached. The two reviewers who rated the paper as borderline accept emphasized that the biological claims are overblown, that the intellectual contributions (the initialization scheme and partial training) are incremental from a statistical learning perspective, and that the potential applications for the future (like alternate learning rules) are too speculative. I agree with both of these reviewers (and the negative reviewer) that the biological rationale is problematic and the approach is not credible as a model of biology. It is not evaluated as a computer vision model either. And I completely agree with the point raised by several reviewers that there is simply no data about how many synaptic updates to target. Hence, statements regarding % of total synaptic updates and % of brain matches seem empty without a precise target. For all these reasons, I recommend this paper be rejected.\"}",
"{\"title\": \"Combined Response 2/2\", \"comment\": \"## Biological rationales of proposed methods\\n\\nR1, R2, R3 have questioned the use of weight compression (WC) as a model of primate post-natal development. Since weight compression bases distributional clusters on trained weights, a similar biological mechanism is difficult to imagine. It is therefore also unclear under what measures WC can be fairly compared to standard Kaiming Normal initialization.\\n\\nWe do not claim that weight compression is how evolution found the at-birth synaptic connections. In our study, we are simply using alternative models to explore the hypothesis that evolution may have discovered an initialization strategy that leads to higher behavioral performance and a higher match to the ventral stream than current initialization distributions. Because of the genomic bottleneck, the initialization cannot specify all the developed weights, so there is a trade-off between the capacity of information in the genome and the at-birth goodness of neural representations. Our results reveal that with nearly identical capacity, an alternative initialization distribution (relative to the one typically used) leads to networks that are more brain-like prior to training. Specifying the weights in a compressed distribution is a significant improvement over Frankle et al. (2019), where it must be specified for every single weight whether it is to be trained or not. Our study\\u2019s contribution is not to prove that the brain uses such a strategy to set up its initial synaptic distribution, but to reveal a new space of possibilities (hypotheses) that should be considered -- the hallmark of any good modeling study. We have clarified this in the paper. 
\\n\\nWe have also run additional statistical analyses following R1\\u2019s remarks to confirm that WC (54+/-1.5%) indeed improves over standard Kaiming Normal initialization (43+/-1.7%, n=10 seeds; permutation test p<1e-5).\\n\\nPlease see the individual reviewer threads for more detailed responses.\"}",
"{\"title\": \"Combined Response 1/2\", \"comment\": \"We sent individual responses to all reviewers to potentially be able to discuss directly through OpenReview, but also wanted to post an overall response to address common criticisms.\\n\\n## ANNs as models of development\\n\\nR1 and R3 raised concerns over the analogy of artificial (deep) neural networks to the development of the primate visual system. R1 for instance states \\u201cno neuroscientist is claiming that a deep neural network is a complete and accurate model of the (development of the) primate visual system\\u201d and R3 adds that our study \\u201coperate[s] under the premise that visual circuitry develops purely via \\\"supervised\\\" learning.\\u201d\\n\\nWe absolutely agree that no current artificial neural network (ANN) is a complete model of the primate visual system and its development \\u2014 that is precisely our motivation for trying to find alternative ANNs that are more brain-like. In this study, to better compare ANNs to the development of the primate ventral stream, we make the explicit commitment of ANN initialization to the \\\"birth state\\\" and ANN training to experience-dependent learning. We then focus on changes to these two stages that may be more in line with biological post-natal development. Current deep ANN models of the visual system are criticized for being non-biological due to their reliance on an excessive number of supervised weight updates (e.g. Grossberg 1988, 2020, Marcus 2004). While other studies have focused on other aspects such as investigating biologically plausible implementations of supervised learning (e.g. Lillicrap et al. 2016, Scellier et al. 2017) or very recently self-supervision (Konkle et al. 2020, Zhuang et al. 
2020), we here studied the amount of training (both in labeled images and in synaptic updates) required to achieve adult states that are still very brain-like, and propose initialization and training procedures that reduce the amount of training required. We do not argue that the changes we have proposed are fully biological models of post-natal development, only that they more concretely correspond to biology than current models. While solving the entire development problem all at once is too much for one study, here we take the first steps in this direction, laying the ground for future work. \\nWe have updated the paper and made the framing of this partial approach of improving -- but not yet accomplishing -- models as hypotheses of biological development more clear. We have also expanded the related work section to discuss approaches addressing other ANN shortcomings of biological learning in more detail.\\n\\n\\n## Brain-Score as a metric for evaluating alignment of models with the primate ventral visual stream\\n\\nR2 points out a limitation in the Brain-Score benchmarks that were used to quantify the match between models and the primate ventral stream. Since they are based on a limited number of datasets, the scores might not generalize to new datasets and the wording did not make this clear. \\n\\nWe have updated the text to make the use of the benchmarks in this study as limited proxies more explicit, but also to point out results that support the generalization of even the current set of Brain-Score V1, V2, V4, IT, and behavior benchmarks. Most importantly, https://papers.nips.cc/paper/9441-brain-like-object-recognition-with-high-performing-shallow-recurrent-anns, Fig. 2, showed that model scores generalize to new images and new primates/recordings. 
Figure 1 in our study also addresses one version of an ablation experiment: reducing the model\\u2019s developmental process leads to a considerable spread in the scores (reducing them to as low as 20% of the original model\\u2019s score), but perhaps also not as severe an immediate reduction with fewer updates as many in the field would have thought. As R4 nicely pointed out, the techniques themselves seem to generalize well to reasonably similar architectures (Figure 5A).\"}",
"{\"title\": \"Follow-up Response\", \"comment\": \"Thank you for your quick response and the opportunity to discuss this concern! Your clarification was very helpful -- if we understand correctly, there are two related overarching concerns:\\n\\n 1. **Wording**: 80% match on Brain-Score is not necessarily the same as 80% match to the brain due to the limited number of datasets. The current text does not make this clear. We completely agree with this point and will update the language in the paper to make this more explicit.\\n\\n 2. **The \\u201cgoodness\\u201d of Brain-Score**: It is unclear to what extent the current benchmarks on Brain-Score are aligned with the \\u201creal\\u201d representational similarity to the brain.\\nWe acknowledge this criticism but would also like to push back on some positions. This point is perhaps a bit more philosophical, so we will try to explain in more detail where we are coming from.\\n\\n2.1 The \\u201creal\\u201d representational similarity to the brain can be operationalized as the match of a model on all benchmarks and datasets that could ever possibly exist. These benchmarks could employ very different metrics, stimuli, or recording techniques than the ones in use today. Either way, this (extremely large) set of benchmarks contains the current (very small) set of benchmarks on Brain-Score, so there has to be some alignment. The generalization of scores to new stimuli and primate recordings as shown in Fig. 2, https://papers.nips.cc/paper/9441-brain-like-object-recognition-with-high-performing-shallow-recurrent-anns, makes us hopeful that this alignment is fairly substantial. If a specific ANN can perfectly predict brain responses, it thus has to employ similar representations to the brain.\\n\\n2.2 Quantifying what Brain-Score=1.0 on a subset of benchmarks actually means with respect to the \\u201ccomplete\\u201d set of benchmarks (2.1) is impossible to address until we have collected all the data we could ever hope for. 
At that point, the benchmarks are perfectly aligned with the \\u201creal\\u201d representational similarity by definition. In practical terms, we can approximate this alignment with held-out benchmarks (as mentioned in 2.1) and we see no other way forward than to work with the benchmarks at hand, while adding more and more benchmarks that break the models. As far as we are aware, the benchmarks in Brain-Score are the most extensive set of primate ventral stream benchmarks that is currently readily available.\\n\\n2.3 It is thus in our view impossible that a high score on even the current Brain-Score benchmarks can be achieved with absolutely no \\u201creal\\u201d representational similarity. In the limit of infinite data outlined above, a score of 1.0 would require an exact copy of the brain.\\n\\n2.4 To offer a practical suggestion that could perhaps serve to connect the functional scores to anatomy: we could test how well the model would score if its mapping of layers to regions were jumbled up. That is, what would the score be if we used the V1 layer to predict IT and the IT layer to predict V1? Our prediction would be that this anatomical mismatch should lead to decreased scores -- likely still well above zero because V1 and IT also predict each other in the brain, but it might at least give a little more confidence in a qualitative match between model and brain processing.\\n\\nEither way, we appreciate your constructive criticism and will definitely work the points from this discussion (that we are happy to continue!) into the final paper.\"}",
"{\"title\": \"Digging deeper into Question 1\", \"comment\": \"Thank you for the comments. With the exception of Q1 they are a satisfactory response to my criticism.\\nNow regarding Q1 - it was meant as a somewhat deeper question than just \\\"what kind of score is considered good\\\". Let me try to have another go at it.\", \"step_1\": \"Imagine you've built a model that has a match of 1.0 according to BrainScore. Would you then conclude that the representation employed by this model is 100% identical to the representation that is employed by the brain? I would assume you would not, because the fact that it can perfectly predict brain responses from ANN activity does not yet mean that those systems have similar representations.\\n\\n(Let me note that I appreciate that this is not on you, but rather on the creators of BrainScore. However, using BrainScore as your metric _is_ on you, which allows me to pick at you here :) )\", \"step_2\": \"Now let's hypothetically assume that scoring 1.0 on BrainScore corresponds to being 3% identical in terms of representations (a rather low number, I agree, but let me show where this thought experiment leads). CORnet-S achieves 0.747. Multiplied by 3% it means (using our hypothetical assumption) that CORnet-S's representation is 0.747 * 3% = 2.241% identical to the brain's representation. And with 5% synaptic updates, as you show, we capture 80% of that, putting us at 1.7928% identical. 
Given all the noisiness and approximate nature of biological readings would you still say that the difference between 1.7928% and 2.241% has biological meaning?\", \"step_3\": \"A sentence like \\\"we find that only 2% of supervised updates (epochs and images) are needed to achieve ~80% of the match to adult ventral stream\\\" makes a reader believe that since you achieved 80% of score then those 2% capture \\\"a lot\\\" of similarity between ANNs and brains, while in reality it only means that it captures 80% of the _score_, but how much of the similarity between the ANNs and brains it is would fully depend on how biologically meaningful is the score itself.\", \"another_way_of_putting_it\": \"a reader should be made extremely aware that when you say \\\"80% match\\\" you don't mean \\\"80% match to the brain\\\", but \\\"80% match to the score\\\". If this is said explicitly and repeatedly the reader has a chance to understand that the actual significance of this result depends on how good the score is as a metric of similarity between ANNs and brains.\\n\\nA way out of this problem would be somehow quantify what does BrainScore=1.0 actually mean in terms of closeness of representations of the two systems, but that is a highly non-trivial task.\\n\\nEven a more drastic thought experiment would be if we assume (just as an experiment) that 1.0 BrainScore is achievable even with absolutely no similarity between representations of an ANN and the brain. In this case capturing 80% of the score would not tell anything about the match between representations...\\nThis remains my main concern.\"}",
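The thought-experiment arithmetic in Step 2 can be written out explicitly. Note that the 3% figure is, as the reviewer states, purely hypothetical; all numbers below are taken from the thought experiment, not measured quantities:

```python
# Reviewer's Step 2 thought experiment: if a BrainScore of 1.0 corresponded
# to only 3% representational identity (a hypothetical assumption), what
# would the reported numbers imply?
score_ceiling_similarity = 0.03   # assumed similarity at BrainScore = 1.0
cornet_s_score = 0.747            # CORnet-S Brain-Score
fraction_of_score_kept = 0.80     # fraction of the score kept with ~5% of updates

implied_similarity = cornet_s_score * score_ceiling_similarity
similarity_with_reduced_training = implied_similarity * fraction_of_score_kept

print(round(implied_similarity * 100, 4))                # 2.241
print(round(similarity_with_reduced_training * 100, 4))  # 1.7928
```

The point of the exercise: whether the gap between 2.241% and 1.7928% is biologically meaningful depends entirely on the unknown mapping from score to similarity.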
"{\"title\": \"Initial Response 2/2\", \"comment\": \"Regarding the point about local learning: Indeed, we speculate about the possibility of local learning rules in the Discussion. We think this work here might enable such rules because Critical Training shows that training a subset of layers can be sufficient for reasonable accuracy and brain predictivity.\\n\\nWe agree that the statement \\\"synaptic updates primarily take place in higher cortical regions\\\" is unsupported. We will update the corresponding paragraph in the manuscript. We maintain the conclusion that the fact that early visual areas in the model converge earlier is in agreement with neurodevelopmental studies. Behaviors that rely on low-level spatial and temporal processing of visual inputs reach adult-like performance very early - 4 years for temporal vision and 6 years for spatial vision (Ellemberg et al. 1999). On the other hand, more complex visual behaviors that rely on higher cortical regions, such as face perception, only fully develop at an age of around 16 years old (Grill-Spector et al. 2008).\", \"re_numerical_imprecisions\": \"(i) Indeed, there was a plotting error. We will fix this and have verified the correctness of results with additional runs and statistical analyses that confirm a score of 54+/-1.5% for WC, n=10 seeds; and an improvement over 43+/-1.7% for KN, permutation test p < 1e-5.\\n(ii) We only included Brain-Score benchmarks of one type (linear predictivity) in this study to keep results comparable across cortical regions. Brain-Score includes an additional IT benchmark measuring temporal correspondence of models which puts CORnet-S at the top overall. We will clarify this in the paper.\"}",
"{\"title\": \"Initial Response 1/2\", \"comment\": \"Thank you for your review and this great summary. To make the best constructive use of OpenReview, we wanted to send an initial response with the changes we are planning. Please let us know if those changes fully address your concerns with our study and if there are additional analyses that would be helpful to clarify any remaining questions.\", \"re_pros\": \"We agree with the point about constructing model taxonomies in future work. Applying weight clustering in this way on a range of model architectures would be interesting to determine model similarities.\\n\\nWe also share the intuition that WC+CT was expected to be most advantageous in a scarce data regime; that is exactly what we focused on in this work.\", \"re_cons\": \"The models are indeed not yet accurate models of development. The analogies used in our study are the framework (p. 2) in which we lay out what the models should, in our view, aspire towards. We do not argue that the changes we have proposed are fully biological models of post-natal development, only that they more concretely correspond to biology than current models. By changing the models to be more in line with biological post-natal development while still achieving adult states that are very brain-like, we hope that the new models (i.e. computational hypotheses with all parameters fixed) will become more serious models of visual development.\\n\\nWe see these concrete commitments of model stages to biological stages as essential so that we can concretely relate biological data with model predictions. They allow for a concrete tracking in time between model and biology from \\u201cbirth\\u201d (instantiated architecture) via \\u201cdevelopment\\u201d (experience-dependent training updates) to \\u201cadulthood\\u201d (inference and adult-level learning).\\n\\nFollowing this framework, many aspects of current models\\u2019 development are non-biological. 
This study tackles the number of weight updates with a combination of reduced training, improved initialization, and training a subset of layers. Related works tackle, for instance, biologically plausible variants of back-propagation (e.g. Lillicrap et al. 2016, Scellier et al. 2017) or very recently self-supervision (Konkle et al. 2020, Zhuang et al. 2020). We will discuss these in more detail and also refer to Pozzi et al. 2020 for a concrete implementation of RL in this context. We will also make our partial approach of improving -- but not yet accomplishing -- models as hypotheses of biological development more clear.\\n\\nWeight Compression (WC) improves over Frankle et al. 2019 by specifying the weights in a compressed distribution instead of having to specify for every single weight whether it needs to be trained or not. This compression is biologically necessary due to the information bottleneck in the genome (section 4) where detailed specifications such as in Frankle et al. 2019 are inconsistent with the limited capacity of the genome. Weight Compression is an existence proof that such improved distributions can be found and in this case improve from 43% to 54% (Fig. 2B, see below for an update on the results reported in this figure). We don\\u2019t think evolution found these distributions by compressing learned weights; rather we view WC as a way of showing that there are better initial distributions that could have been optimized during evolution and encoded with the genome with little information.\\n\\nThe critical layers in CT are those with the fewest weights, chosen to make training as minimal as possible. For comparison, training only the middle layer of the IT block would require training 38M of the 53M parameters (over 70%).\\n\\nWe agree with the comment that 5C is out of place and we will move this subpanel to Fig. 2, where we present the results of the WC. 
By showing these kernel cluster centers, we wanted to connect our work with analytic interpretation studies (sec. 7, last paragraph). WC is a constructive approach to validating kernel clusters that could come out of post-hoc analyses.\\n\\nCould you clarify which papers\\u2019 definitions you are referring to with regards to the \\u201csupervised updates\\u201d terminology? We would like to use canonical terms and apologize for not being aware of existing definitions.\", \"regarding_emergence_of_orientation_selectivity_in_the_primary_visual_areas\": \"In primates, not only orientation selectivity is present at birth, but the primary visual cortex already shows a topographical arrangement of orientation columns prior to visual experience (Wiesel and Hubel, 1974). The same is true for other mammalian species such as cat and ferret (for a review of the literature see Huberman, Feller and Chapman 2008).\"}",
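To make the Weight Compression idea concrete, here is a minimal sketch of compressing a layer's trained kernels into a few cluster statistics and sampling fresh initial weights from them. This is not the paper's actual implementation; the clustering method (plain k-means), the cluster count, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
trained_kernels = rng.normal(size=(64, 9))  # stand-in for trained 3x3 kernels

def compress(kernels, n_clusters=4, n_iter=20):
    """Cluster kernels with k-means and keep only (mean, std, proportion) per cluster."""
    centers = kernels[rng.choice(len(kernels), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(kernels[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = kernels[labels == k].mean(axis=0)
    # A handful of numbers per cluster instead of every trained weight.
    return [(centers[k], kernels[labels == k].std() + 1e-8, (labels == k).mean())
            for k in range(n_clusters) if np.any(labels == k)]

def sample_init(stats, n_kernels):
    """Draw new initial kernels from the compressed cluster statistics."""
    means, stds, props = zip(*stats)
    p = np.array(props) / sum(props)
    choices = rng.choice(len(stats), size=n_kernels, p=p)
    return np.stack([rng.normal(means[c], stds[c]) for c in choices])

stats = compress(trained_kernels)
new_weights = sample_init(stats, n_kernels=64)
print(new_weights.shape)  # (64, 9)
```

The point of the sketch is the bottleneck: the compressed description (a few statistics per cluster) is orders of magnitude smaller than the full weight matrix, mirroring the genomic-capacity argument above.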
"{\"title\": \"Initial Response\", \"comment\": \"Thank you for your review and this great summary that well highlights why we are also excited about this work. To make the best constructive use of OpenReview, we wanted to send an initial response with the changes we are planning. Please let us know if those changes fully address your concerns with our study and if there are additional analyses that would be helpful to clarify any remaining questions.\", \"regarding_the_update_of_only_down_sampling_layers\": \"We do not know of any dataset with which we could make more precise biological commitments. As mentioned in the review, at this point Critical Training is primarily a proof-of-principle that different cortical learning rates (even in the extreme of no updates) can lead to useful representations.\", \"regarding_the_weight_updates_metric\": \"We report both the number of supervised updates (whole-model, parallel updates) as well as the number of supervised synaptic updates (per synapse) because we view both as relevant. To quantify experience-dependent updates and in lieu of more precise biological measurements (e.g. estimating the energy expenditure per synapse), we agree that supervised updates should be measured. We will clarify this point in the paper.\\n\\nRegarding bits/synapse: The 37 bits/synapse are indeed a possible under-estimate because more bits might be required to specify different synaptic strengths. Our main argument with this estimate was that it is infeasible to precisely encode \\u201ca pre-trained weight matrix\\u201d in the genome, necessitating either the learning of most weights (a point often made by deep learning advocates) or compressed initialization (such as WC proposed in this study). \\n\\nPg. 5: Indeed, we will update this.\\n\\nFig. 3 middle layers: We only experimented with layers with the fewest weights to obtain the potential most minimal training. 
For comparison, training the middle layer of the IT block would require training 38M of the 53M parameters (over 70%).\\n\\nFig. 3 trained parameters: This is a result of the different training techniques: The most minimal point for Downstream Training freezes all parameters (see subpanel A, gray box, last row) whereas the most minimal point for Critical Training still updates a single layer per block (see subpanel A, cyan box, last row).\\n\\nFig. 4: We will make the suggested improvement.\\n\\nSection B.1: We used 4 components for V1.conv1. Note that the Gabor prior is not used beyond V1.conv1. We will add these details to the text.\\n\\nSection B.3: WC uses 4,166 parameters, Mixture 428,114, Kernel Normal 433,735, No Gabor prior 20,026 to initialize the weights. We will add this to the text.\", \"resnet_mapping\": \"To initialize ResNet from the WC clusters, we mapped the ResNet architecture (blocks 0 - 4 ) to the CORnet-S architecture (blocks V1, V2, V4, IT) as follows 0 \\u2192 V1, 1 \\u2192 V2, 2 \\u2192 V4, 3 \\u2192 V4, 4 \\u2192 IT. Since ResNet blocks have more layers but no recurrence, the CORnet-S layers are mapped repeatedly. Based on the mapping layers we initialized weights from the CORnet-S cluster centers.\"}",
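The block mapping described above can be sketched as follows; the helper names are illustrative and not from the paper's code:

```python
# ResNet blocks 0-4 mapped to CORnet-S regions, as described in the response:
# 0 -> V1, 1 -> V2, 2 -> V4, 3 -> V4, 4 -> IT.
resnet_to_cornet = {0: "V1", 1: "V2", 2: "V4", 3: "V4", 4: "IT"}

def source_region(resnet_block: int) -> str:
    """CORnet-S region whose cluster centers initialize this ResNet block."""
    return resnet_to_cornet[resnet_block]

def source_layer(region_layers, resnet_layer_idx):
    """ResNet blocks have more layers but no recurrence, so the CORnet-S
    layers within a region are reused cyclically."""
    return region_layers[resnet_layer_idx % len(region_layers)]

print([source_region(b) for b in range(5)])  # ['V1', 'V2', 'V4', 'V4', 'IT']
```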
"{\"title\": \"Initial Response 2/2\", \"comment\": \"### Sampling initial weights from an improved distribution\\nWe do not claim that weight compression is how evolution found the at-birth synaptic connections. In our study, we are simply using alternative models to explore the hypothesis that evolution may have discovered an initialization strategy that leads to higher behavioral performance and higher match to the ventral stream than current initialization distributions. Because of the genomic bottleneck, the initialization cannot specify all the developed weights, so there is a trade-off between capacity of information in the genome and at-birth goodness of neural representations. Our results reveal that with nearly identical capacity, an alternative initialization distribution (relative to the one typically used) leads to networks that are more brain-like in their adult state. Our study\\u2019s contribution is not to prove that the brain uses such a strategy to set up its initial synaptic distribution, but to reveal a new space of possibilities (hypotheses) that should be considered -- the hallmark of any good modeling study. We will clarify this in the paper. \\n\\n### Figure 1 presents models from different developmental trajectories\\nEach dot (model with a certain training) in Figure 1 is a different hypothesis of how the ventral visual stream might have developed. Each dot corresponds to a model architecture trained for a certain amount of time (epochs and labeled images) and the adult brain-likeness that is achieved by that model. To the best of our knowledge, this is the first work that presents a multitude of neural and behavioral scores over these different models and shows that: early visual representations of some models are very adult-like (i.e. 
matched to the brain data) with very little supervised experience, and that high-level visual representations of all models currently require more supervised experience to achieve comparable levels of adult brain match. We plan to update the description of this figure to make this more clear in the text.\\n\\n### Tests for significance\\nFor Figure 2, we have updated the comparison between Kaiming Normal (KN) and Weight Compression (WC) with a larger number of seeds and performed statistical tests to show that our method significantly improves brain predictivity (43+/-1.7% vs 54+/-1.5% for KN and WC relative brain predictivity respectively, n=10 seeds; permutation test p < 1e-5). Due to the amount of different models trained, we only have one seed per model in Figure 3 but the differences in magnitude and the consistency of results leave no doubt for the improvements of the Critical Training (CT) over the Downstream Training.\"}",
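The KN-vs-WC comparison above (43+/-1.7% vs 54+/-1.5%, n=10 seeds) can be sketched with a standard permutation test. The per-seed scores below are synthetic placeholders drawn to match the reported means and standard deviations, not the actual values:

```python
import numpy as np

rng = np.random.default_rng(0)
kn = rng.normal(43.0, 1.7, size=10)  # Kaiming Normal, n=10 seeds (synthetic)
wc = rng.normal(54.0, 1.5, size=10)  # Weight Compression, n=10 seeds (synthetic)

def permutation_test(a, b, n_perm=10_000):
    """One-sided test: how often does a random relabeling of the pooled
    scores produce a mean difference at least as large as the observed one?"""
    observed = b.mean() - a.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += (perm[len(a):].mean() - perm[:len(a)].mean()) >= observed
    return (count + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

p = permutation_test(kn, wc)
print(p < 0.01)
```

With an ~11-point gap and per-group spreads under 2 points, essentially no relabeling reaches the observed difference, so the p-value sits near the resolution limit of the test.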
"{\"title\": \"Initial Response 1/2\", \"comment\": \"Thank you for your review. To make the best constructive use of OpenReview, we wanted to send an initial response with the changes we are planning.\\nThe central argument of this review is that current artificial neural networks are not good models of visual system development -- a point that we completely agree with and which motivates the explorations of different types of models that we undertook to test in this paper. The main specific criticism seems to be that we did not justify the biological assumptions that underpin our choices of the specific models that we chose to test. We agree that our choices should be better explained and justified and we plan to make those changes as outlined below. However, we hope that the reviewer will agree that, because no other image-computable models are even close to explaining the adult ventral visual stream, exploration of the minimal assumptions needed to achieve models that are at least as brain-explanatory as the current deep CNN models is a contribution to the field that might deserve visibility in ICLR. \\n\\n### No current ANN is a complete and accurate model of the (development of the) primate visual system\\nWe absolutely agree that no current artificial neural network (ANN) is a complete model of the primate visual system and its development \\u2014 that is precisely our motivation for trying to find alternative ANNs that are more brain-like. In this study, we focus on model changes that might be more in line with biological post-natal visual development. Current deep ANN models of the visual system are criticized for being non-biological due to their reliance on an excessive amount of supervised weight updates (e.g. Grossberg 1988, 2020, Marcus 2004). While other studies have focused on other aspects such as investigating biologically plausible implementations of supervised learning (e.g. Lillicrap et al. 2016, Scellier et al. 
2017) or very recently self-supervision (Konkle et al. 2020, Zhuang et al. 2020), we here studied the amount of training (both in labeled images and in synaptic updates) required to achieve adult states that are still very brain like and propose initialization and training procedures that reduce the amount of training required. We do not argue that the changes we have proposed are fully biological models of post-natal development, only that they are more biological than the current models. The idea is that solving the entire development problem all at once is too much for one study, but that even partial improvements in this direction will be informative to further work. We will make the framing of this partial approach clear in the updated paper version. \\n\\n### ANNs as computational hypotheses of primate object recognition\\nWe disagree with R1 in the sense that, in our view, fully-trained ANN models are not just tools, but they are computational hypotheses (i.e. approximate models) of brain processing. Specifically, each such model (all parameters fixed) can be aligned and tested at all levels of the adult ventral visual stream and any failed predictions of neural responses at any level falsifies that model and can be used to guide the building of new, improved models. To date, certain, fully trained, ANN models are the most accurate hypotheses of ventral stream processing. Here, by exploring alternative, more biologically-plausible ways of discovering such ANN models, we hope that the new models will become more serious models of visual development. The first test of any such model is its match to the adult visual data, so our approach pivots off of that measure. Whether more biologically plausible models of development will lead to benefits for Machine Learning is an open research question that we do not engage with in our study. But existing studies already suggest that closer modeling of biology will have additional benefits, e.g. 
increased generalization (Kubilius et al. 2019) and robustness (Dapello et al. 2020). The \\u201cworst\\u201d outcome of such modeling efforts is a better model of biological development which, in our view, is an exciting research direction in itself. \\nThese benefits are of course still mostly in the future and our study does not solve them all. But we are taking first steps towards them by showing that the number of supervised synaptic updates in brain models can be more closely aligned with biological development without a severe decrease in match of the \\u201cadult\\u201d model to the adult brain.\"}",
"{\"title\": \"Initial Response\", \"comment\": \"Thank you for your review. To make the best constructive use of OpenReview, we wanted to send an initial response with the changes we are planning. Please let us know if those changes fully address your concerns with our study and if there are additional analyses that would be helpful to clarify any remaining question.\\n\\n(1) The standard-trained CORnet-S achieves a score of 0.42 relative to an estimated ceiling (second-to-last line on page 3). For comparison, a pixel baseline only achieves 0.03, i.e. 7% on the normalized plots in the paper. We have this baseline in Figure 2 and Figure 4 but will also add it to Figure 1 for reference.\\n\\n(2+3) This is a great question and as far as we know the exact number of synaptic updates is unknown. There is an upper limit based on the update rate of long-term synaptic plasticity. Another upper limit we use in the paper is that humans saccade only 2-3 times per second, so the number of new images we receive is limited. In some sense, we are trying to motivate more accurate biological estimates so that models can be falsified -- currently, such estimates are rather vague giving a lot of leeway to models. We will make these upper limits more clear in the introduction. We will also remove the quote about children following your comment because we agree that self-supervision could serve to obtain labels for each saccade.\\n\\n(4) We plot standard deviation respectively over multiple runs in Figure 2B. We have run more seeds to confirm statistical differences following Reviewer 1\\u2019s remark showing that Weight Compression scores significantly higher than vanilla Kaiming Normal (43+/-1.7% for KN vs 54+/-1.5% for WC, n=10 seeds; permutation test p < 1e-5).\\n\\n(5) The primary \\u201czero\\u201d baseline we are using are the pixel values (pixels in Figure 2 and 4) which achieve only 7% of the standard-trained model. 
Therefore, the 54% achieved by WC without training far improves over this baseline.\\n\\n(6) Vanilla Kaiming Normal initialization achieves 43% (Figure 2B), we will add this to the text.\", \"regarding_the_use_of_the_brain_score_suite_of_benchmarks\": [\"Brain-like by definition is to match and predict observed data (be it neural, behavioral, or anatomical).\", \"To the best of our knowledge, Brain-Score is the gold standard for comparing models of the ventral stream on an integrative set of neural and behavioral benchmarks across the regions V1, V2, V4, and IT that support core object recognition behavior.\", \"Many analyses to confirm the validity of the benchmarks have been reported in previous papers:\", \"Different architectural choices lead to considerable spread in the scores (https://www.biorxiv.org/content/10.1101/407007v2, Fig. 1)\", \"Model scores generalize to new images and new primates/recordings (https://papers.nips.cc/paper/9441-brain-like-object-recognition-with-high-performing-shallow-recurrent-anns, Fig. 2)\", \"Models with a better V1 match also better match behavior in terms of adversarial robustness (https://papers.nips.cc/paper/2020/hash/98b17f068d5d9b7668e19fb8ae470841-Abstract.html)\", \"Figure 1 in our study also addresses one version of an ablation experiment: reducing the model\\u2019s developmental process leads to a considerable spread in the scores (reducing them as far as 20% of the original model\\u2019s score), but perhaps also not as severe of an immediate reduction with fewer updates as many in the field would have thought.\", \"In Figure 5A, we also made sure that the distributions found for CORnet-S through WC and the critical training (CT) generalizes to Resnet50 and MobileNet architectures. (Reviewer 3 makes an interesting remark about using these techniques to create model taxonomies)\", \"Please let us know if there is a particular additional analysis that you had in mind and we would be happy to run it.\"]}",
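For reference, the ~7% pixel-baseline figure quoted in (1) and (5) follows directly from normalizing the raw scores by the standard-trained CORnet-S:

```python
# Raw Brain-Score values quoted in the response, both relative to an
# estimated ceiling; percentages in the figures are expressed relative
# to the standard-trained CORnet-S.
standard_trained = 0.42  # standard-trained CORnet-S
pixel_baseline = 0.03    # pixel baseline, same scale

relative_baseline = pixel_baseline / standard_trained
print(f"{relative_baseline:.0%}")  # 7% -- the 'pixels' bar in Figures 2 and 4
```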
"{\"title\": \"How much can we rely on BrainScore's metric in studies like this one?\", \"review\": \"Summary\\n-------\\nThe paper is about ANNs being the best-known models of developed primate visual systems. However, this fact does not yet mean that the way those systems are trained is also similar. This distinction, and a step towards answering this question, is the main motivation of this work. The authors demonstrate a set of ideas that, while drastically reducing the number of updates, maintain high Brain Predictability according to the BrainScore. The significance of this result in my opinion largely depends on how well we can map those observations and methods to biological meaning and knowledge on how primate brains are trained (see the discussion point below).\\n\\n\\nCritique, Questions, Discussion\\n-------------------------------\\n(1) How good is the \\\"match\\\" between the brain and DCNNs in the first place? For example, if we measure the match in terms of correlation (between responses, or predictions, any metric would work in the context of this question), then 80% of corr=1.0 would be very impressive and significant, while 80% of corr=0.2 (being corr=0.16) could well fall under the noise and while being significant numerically, does not give us the opportunity to say that we have captured 80% of the match between the artificial system and the ventral stream (because what we have actually captured is 80% of corr=0.2, which might as well be almost nothing).\\n\\n(2) \\\"squirrels to jump from tree to tree within months of birth\\\", \\\"macaques to exhibit adult-like visual representations after months\\\" -- how many synaptic updates happen during those months? Do we know? Maybe it is also in trillions? In which case this portion of the argument would fall apart. 
Emphasis on \\\"supervised\\\" would probably still survive.\\n\\n(3) \\\"a child would need to ask one question every second of her life to receive a comparable volume of labeled data\\\" -- are they not? I would say children get even more data if by \\\"question\\\" we mean not only verbal questions and answers, but also answers that are tactile (\\\"how will this feel to the touch?\\\"), auditory (\\\"what does this object sound like\\\"), visual prediction (\\\"will this thing now move to the right or to the left?\\\"), etc. Seen like this, I would say that children receive tons of supervised data and \\\"one per second\\\" is an underestimation.\\n\\n(4) How does the \\\"match\\\" vary depending on random initialization? Is it consistently 54% or is there a substantial +/-?\\n\\n(5) How do we know the \\\"true zero\\\" in terms of the \\\"match\\\"? What would be a model (function? maybe a constant function?) that clearly has zero \\\"match\\\"? If we now take this function and run it through your pipeline to get the match%, would the result be indeed 0% or something else? Maybe 54% is the \\\"true zero\\\" and not 0%.\\n\\n(6) Why is sampling from CORnet-S-based clusters of parameters a good way of modeling the \\\"at-birth\\\" situation? Compared to the 54% achieved with this method, what would be the match% if the network were initialized with vanilla Kaiming Normal? Uniform?\\n\\n\\nRecommendation and justification\\n--------------------------------\\nMy main concern is with the interpretation of the meaning of this work. BrainScore's metric is a very approximate proxy that weakly reflects the match between models of vision. In this work, however, this metric is taken as a \\\"gold standard\\\" and it is assumed that achieving, for example, 50% of a BrainScore of 0.42 is something biologically meaningful. 
An ablation experiment that would demonstrate that achieving these 50% (or other numbers presented in the paper) is a non-trivial event which can only happen if the model is indeed becoming more \\\"brain-like\\\" would go a long way in making the case of this work strong. I suspect, however, that such an ablation study will show that there are ways to achieve a high% of BrainScore using models that are completely dissimilar to the brain. I currently evaluate this submission as borderline, and am looking forward to the authors' views on the concerns I have outlined above: do these indeed matter and affect the claims of this work (and how should we see them if that's the case), or are these concerns largely irrelevant (and why can we ignore them if that's the case)?\\n\\n\\nAdditional remarks\\n------------------\", \"arguably_missing_references_on_modeling_of_the_ventral_stream_with_anns\": \"https://www.nature.com/articles/s42003-018-0110-y, https://www.jneurosci.org/content/35/27/10005\\n\\n\\nUPDATE - Nov 30\\n-----------------------\\nAfter looking at the revised version of the manuscript I am still concerned that the claims made in the abstract (and implied in the main text of the paper) about the match of ANNs to the brain are misleading the reader into assigning greater biological significance to the reported result than it actually holds. While the authors made slight modifications in the text and added a few sentences commenting on the issue, these changes did not constitute a change that would make the reader \\\"extremely aware that when you say \\\"80% match\\\" you don't mean \\\"80% match to the brain\\\", but \\\"80% match to the score\\\"\\\". I find that a softer claim that would explicitly acknowledge that 5% of \\\"synaptic\\\" updates explain 80% of the predictivity score and not 80% of the match to the brain would make this work more scientifically precise and thus more valuable. 
I am keeping my original assessment of this paper as being borderline.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting yet unconvincing ideas about modeling the primate visual system with DNNs\", \"review\": \"The study starts from the fact that DNNs have been around and popular for a while for modeling the visual system, but that they are not realistic because they are trained via supervised learning approaches with a very large number of parameters and that this is not a feasible model of the development in the visual system.\\n\\nIn general, although the manuscript presents some interesting ideas, it makes many assumptions without providing clear bases for these assumptions (e.g. compressing the weights of a pretrained network to sample new weights is posed as a realistic approximation of the infant visual brain) and lacks a theoretical foundation for the claims and experiments that are presented. The authors acknowledge that this study is intended as a proof of principle, but given the arbitrary nature of the choices made, I do not see the added significant value of the results.\\n\\nWhile DNNs are indeed commonly used as models of the primate visual system, in my view, the current study is addressing a somewhat inconsequential problem. This is because to the best of my knowledge, no neuroscientist is claiming that a deep neural network is a complete and accurate model of the (development of the) primate visual system. Furthermore, it is well-known and acknowledged that deep neural networks are not biologically plausible models of (how learning occurs in) the brain. They are currently one of the best computational tools to use to study the sensory (and especially the visual) nervous systems, and that is all that they are. It is not clearly explained why it is necessary to claim that the learning in these models and the development of the brain has to be similar for them to be good models of vision. 
Of course, we should strive for better and more accurate models of the brain, but in my view the current study does not serve this goal.\\n\\nIn section 4 the authors describe an initialization protocol for the network weights which involves compressing a trained model\\u2019s weights into clusters and then sampling from these clusters. What is not clear is why the authors assume that this can be a valid model of the infant visual system. At this point their approach sounds like arbitrarily selecting a set of criteria to make the networks perform worse than fully trained networks, and then training them. I could be missing something, but I do not see the relevance or necessity of an approach such as the presented one. A main concern is that no theoretical basis has been established in the paper besides some superficial ideas. For instance, why would an infant brain be made up of a DNN with connections whose weights are initialized with the method the authors came up with?\\n\\nMuch of the methodological details are only included in the appendix. I found it rather odd to not find any information about, for example, the proposed weight initialization method in the paper.\\n\\nIt is not clear to me what is presented in Figure 1 and why. Why are the authors showing how models from another paper train?\\n\\nAnother concern is that nowhere in the results is there a test for significance. The improvements of the results could be a coincidence, since the results are heavily dependent on one experiment.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Surprising reduction in number of weight updates needed to achieve a good Brain-Score\", \"review\": \"Summarize what the paper claims to contribute.\\nPrevious work developed CORnet-S, a biologically inspired network that leads the Brain-Score benchmark of similarity with the primate ventral stream. A limitation of CORnet-S and other deep networks with high Brain-Scores is that they require many more weight updates than seem biologically feasible. In this paper, the number of weight updates used to train CORnet-S is reduced by two orders of magnitude, while retaining a fairly high Brain-Score. This is done by combining three approaches, including reduced training, initialization of weights using compact distributions that describe trained weights, and updating only a minority of layers. \\n\\nList strong and weak points of the paper.\", \"strong_points\": [\"The paper addresses an important problem that has not been given much attention previously\", \"The work builds on the state-of-the-art model in this domain\", \"The three approaches to reducing updates are complementary and interesting in different ways; the second and third thought-provoking with respect to their biological relevance\", \"The experiments and analysis are thorough\", \"The paper is well written\", \"The context of the work is clearly described and well referenced\"], \"weak_points\": [\"I wasn\\u2019t able to discern any substantial weaknesses.\", \"Clearly state your recommendation (accept or reject) with one or two key reasons for this choice.\", \"I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate reduction in Brain-Score. 
I was surprised that it worked so well.\", \"Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment.\", \"Is the third method (updating only down-sampling layers) meant to be biologically relevant? If so, can anything more specific be said about this, other than that different cortical layers learn at different rates?\", \"Given that the brain does everything in parallel, why is the number of weight updates a better metric than the number of network updates?\", \"Provide additional feedback with the aim to improve the paper.\", \"Bottom of pg. 4: I think 37 bits / synapse (Zador, 2019) relates to specification of the target neuron rather than specification of the connection weight. So I\\u2019m not sure it\\u2019s obvious how this relates to the weight compression scheme. The target neurons are already fully specified in CORnet-S.\", \"Pg. 5: \\u201cThe training time reduction is less drastic than the parameter reduction because most gradients are still computed for early down-sampling layers (Discussion).\\u201d This seems not to have been revisited in the Discussion (which is fine, just delete \\u201cDiscussion\\u201d).\", \"Fig. 3: Did you experiment with just training the middle Conv layers (as opposed to upsample or downsample layers)?\", \"Fig. 3: Why go to 0 trained parameters for downstream training, but minimum ~1M trained parameters for CT?\", \"Fig. 4: On the color bar, presumably one of the labels should say \\u201cworse\\u201d.\", \"Section B.1: How many Gaussian components were used, or how many parameters total? Or if different for each layer, what was the maximum across all layers?\", \"Section B.3: I wasn\\u2019t clear on the numbers of parameters used in each approach.\", \"D.1: How were CORnet-S clusters mapped to ResNet blocks? I thought different clusters were used in each layer. 
If not, maybe this could be highlighted in Section 4.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review for \\\"Wiring Up Vision: Minimizing Supervised Synaptic Updates Needed to Produce a Primate Ventral Stream\\\"\", \"review\": \"This paper presents an empirical study that elucidates potential mechanisms through which models of adult-like visual streams can \\\"develop\\\" from less specific/coarser model instantiations. In particular, the authors consider existing ventral stream models whose internal representations and behavior are most brain-like (amongst several other models) and probe how these fare in impoverished regimes of available labeled data and model plasticity (number of \\\"trainable\\\" synapses). They introduce a novel weight initialization mechanism, Weight Compression (WC), that allows their models to retain good performance even at the beginning of training, before any synaptic update. They also explore a particular methodology for fine-tuning, Critical Training (CT), that selectively updates parameters that seem to yield the most benefit. Finally, they explore these methods/algorithms' transfer performance from one ventral stream model (CORnet-S) to two additional models (ResNet-50 and MobileNet).\", \"pros\": \"The problem that the authors present is an interesting one and undoubtedly useful for many applications. Deep neural networks such as the CORnet-S, ResNet-50, and MobileNet are data-hungry, and obtaining labeled data is an expensive process (and perhaps even implausible in many cases). Techniques to condense these models in terms of parameters and alleviate the need for vast amounts of labeled data while maintaining desirable traits (such as brain-like representations) are important for the machine learning community. 
Though a bit far-fetched at this point, tracking the developmental trajectories of these neural networks can also have other scientific implications in the form of data-driven hypothesis testing.\\n\\nThe most exciting part of the study is the transfer experiment (from CORnet-S to ResNet and MobileNet). This seems like an interesting and novel way to construct model taxonomies. For instance, sampling from the CORnet-S weight clusters works well for ResNets potentially because these two models can be construed as \\\"recurrent\\\" in a way. MobileNets, on the other hand, are purely feedforward and thus are not significantly influenced by knowledge from the CORnet-S weights.\\n\\nMoreover, the authors conduct a series of numerical experiments to identify \\\"when\\\" their proposed methods are most useful. The finding that WC+CT is more advantageous in regimes where data is scarce (as opposed to regimes where data is plenty) is not surprising but a good one to report. I say \\\"not surprising\\\" because WC distills knowledge from a fully trained model, and CT only updates a fraction of the parameters (updating more parameters would require more data to prevent overfitting).\", \"cons\": \"The authors take the analogy between \\\"a developing visual system\\\" and \\\"training a model\\\" a bit too far. They operate under the premise that visual circuitry develops purely via \\\"supervised\\\" learning. Is there conclusive evidence for this? It is also surprising that discussions of reinforcement learning mechanisms never feature, given that these are more biologically plausible.\\n\\nThe novelty (and utility; for ex: Fig 2b) of the proposed initialization technique is marginal. It is not articulated how their method (WC) overcomes the critiques they raise against Frankle et al. 2019. Moreover, claiming that WC achieves decent performance with \\\"zero\\\" synaptic updates is not fair. 
This seems to be closer to restoring pre-trained weights than to random initialization (like KN). \\n\\nFor CT, the authors choose \\\"critical\\\" layers to update. Is there a rationale (or a statistical metric) that justifies choosing these specific layers? \\n\\nThe WC kernel cluster center visualization analysis (Fig. 5c) seems out of place and poorly discussed. What can be gleaned from the 3x3 kernels shown here?\", \"minor\": \"By \\\"supervised updates,\\\" the authors refer to the number of available labels and not the number of parameter updates that happen. This terminology is non-canonical.\", \"employing_gabor_priors_for_the_first_convolutional_layer\": \"Doesn't orientation selectivity emerge in the primary visual areas from experience, rather than structurally hard-coded?\\n\\nThe authors allude to the possibility of using \\\"local\\\" learning rules on a subset of layers identified by CT. However, this is speculation from the point of view of the current manuscript. All the conclusions drawn are from \\\"global\\\" gradients.\\n\\nAmbiguous sentence (Pg. 6, Sec 6): \\\"Reducing the number of supervised updates minimizes required updates by a smaller number of epochs and images.\\\"\\n\\n(Pg. 8) \\\"synaptic updates primarily take place in higher cortical regions\\\": Is there evidence for this?\", \"numerical_imprecisions\": \"(i) The authors claim that the performance of CORnet-S_wc is 54% (relative to the fully trained model). However, in Fig 2b (mean) and Fig 3c (top) the markings seem to be closer to 50%?\\n(ii) (Fig. 4a) The performance of MobileNet seems to be slightly better than CORnet-S, which contradicts the initial claim that CORnet-S is currently the best available model of adult primate visual processing.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
TuK6agbdt27 | Learning Associative Inference Using Fast Weight Memory | [
"Imanol Schlag",
"Tsendsuren Munkhdalai",
"Jürgen Schmidhuber"
] | Humans can quickly associate stimuli to solve problems in novel contexts. Our novel neural network model learns state representations of facts that can be composed to perform such associative inference. To this end, we augment the LSTM model with an associative memory, dubbed \textit{Fast Weight Memory} (FWM). Through differentiable operations at every step of a given input sequence, the LSTM \textit{updates and maintains} compositional associations stored in the rapidly changing FWM weights. Our model is trained end-to-end by gradient descent and yields excellent performance on compositional language reasoning problems, meta-reinforcement-learning for POMDPs, and small-scale word-level language modelling. | [
"memory-augmented neural networks",
"tensor product",
"fast weights"
] | Accept (Poster) | https://openreview.net/pdf?id=TuK6agbdt27 | https://openreview.net/forum?id=TuK6agbdt27 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"mjlR7RXOcYL",
"eFYh-ZTa2dQ",
"inDz_5sIsrX",
"F07oX_-lAsw",
"VrJX6Si8_32",
"Skx2hCGzd9n",
"OgZfwmTixo",
"g1phw767Gb",
"fP3eXGLCQo5",
"KBrk68bhvck",
"paOWDCZfttG",
"LBfmwKZemZm",
"nydafL1iHL0"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040440024,
1606257805297,
1606203905510,
1606176632674,
1605286995459,
1605286663460,
1605286479003,
1605286305334,
1605285643228,
1604286058160,
1603865170861,
1603853023547,
1603737228612
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3467/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3467/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3467/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3467/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3467/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3467/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3467/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3467/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3467/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3467/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3467/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3467/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposed a way to combine LSTMs with Fast weights for associative inference.\\n\\nWhile reviewers had concerns about comparison with Ba et al., and experimental results, the authors addressed all the concerns and convinced the reviewers. The revision strengthened the paper significantly. I recommend an accept.\"}",
"{\"title\": \"Post revision comments\", \"comment\": \"Thank you for answering my questions. I have read the revised paper, and I think it looks better now. It\\u2019s nice to see that the numbers in Table 2 improved after tuning the hyperparameters. I have updated my score.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you, AnonReviewer4, for taking the time to reassess our submission.\"}",
"{\"title\": \"New Experimental results strengthen the claims made\", \"comment\": \"Thanks for your thoughtful rebuttal, I think this clarifies a number of things for me and the new experimental results strengthen the argument. I still believe an ablation or meta-learning baseline would be a useful comparison for the meta-learning experiments. However, there are many improvements in the latest draft which address my greatest concerns. These include your points about the baselines, clearer figures, clarification of improvements over earlier fast weight models, the clarity around the architecture and including the location and content based keys and, last but not least, the bug fix which now clearly demonstrates an improvement over the chosen baselines. I'm happy to increase my score for this paper to a six.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thank you for your time and thoughtful critique. In the main response to the reviews, we list a few limitations of the fast weights architecture by Ba et al. (JBFW) as well as some other important changes. Please have a look. We have run experiments on Ba et al. on catbAbI and were unable to find hyperparameters that give any reasonable results. To the best of our knowledge, the JBFW model has never been shown to work well beyond the toy problems that were considered in the original paper. In a workshop paper at NeurIPS 2017 by Schlag et al. (see paper references), JBFW are significantly outperformed on a basic associative retrieval task which is similar to the simplest bAbI tasks (task1). We believe those are all good indicators that the differences in the details matter here.\\n\\nThe tensor product in the input pattern of the memory creates a bona fide representational space for the keys learned by the LSTM. This factorisation of the input pattern likely increases the LSTM's representational ability beyond its training distribution. If the LSTM e.g. learns to factorise the key patterns into entities for $k_1$ and locations for $k_2$, then the tensor product $k_1 \\\\otimes k_2$ guarantees a unique vector representation for _any_ key and location pairs which is a requirement for it to be a useful input pattern (minimises interference). To repeat for others: in the extreme, if all keys are orthogonal in their key-space and all locations are orthogonal in their location-space, then all compositions will be orthogonal to each other too. This also includes samples which are technically out-of-distribution and all that is required is that the LSTM can extract the factor representations independently. 
It is easier to learn such independent functions because there are many samples which share the same factors which can guide the learning of such a function whereas certain pairs are much more rare or even nonexistent. This argument has been introduced by Schlag et al. (2018) so we only briefly mention it.\", \"as_we_have_mentioned_in_our_general_response\": \"we had a bug in our code and fixing this resulted in a performance increase across all models. The FWM is now convincingly beating all other models with a lead of 7.9% averaged over 3 seeds. In comparison with the TXL, our FWM is not only smaller but also requires many fewer activations to be kept in memory (see table 1).\\n\\nWe are not familiar with the Transformer architecture applied to RL. Do you refer to \\\"Stabilizing Transformers for Reinforcement Learning\\\" by Parisotto et al. (ICML 2019)? There does not seem to be an official public code repository for this work. \\n\\nWe think that due to the simplicity of the linear hetero-associative memory in this network, the sign of the dot product between the write patterns and the read keys doesn't matter as long as it is always the same for a certain pattern. What is more important is the absolute value of the dot product.\\n\\nThank you for mentioning the work of Ritter et al. (2020). This looks interesting indeed but is also missing an official implementation. It appears to us that it would require a disproportionate amount of work to incorporate it as a baseline. Our RL experiments are mostly for demonstrating the versatility of our method. We hope that future work in RL will evaluate different memory augmented models in more complex environments.\\n\\nThanks for pointing out the typo! In the latest version of the submission, we have made significant improvements to the text and the figures. 
\\n\\nWith the latest improvements in our work, we believe that we have addressed the main issues raised in your review and we encourage you to reevaluate your decision. Please let us know if you have any further questions or comments.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your detailed review. Please have a look at the general response to the reviews as there have been some important changes that are relevant to your review.\\n\\nWe added two ablation experiments to the appendix which demonstrate how the FWM drops in accuracy if the key vectors are concatenated. While it is a notable drop in performance, it still outperforms the Transformer-XL. It is true that the presented FWM scales cubically, but it is independent of its sequence length! The Transformer-XL, or any Transformer, stores all keys and values of previous steps that are within its context window which often comes with a large memory requirement. Compare e.g. the number of activations in table 1 between the TXL and the FWM!\", \"q1\": \"It does, because the model needs to make sure it is not mixing facts from previous stories with the facts of the ongoing story. This means that the model needs to learn to update its memory accordingly. Regular bAbI, on the other hand, is often simplified to \\\"sequence classification\\\". In the appendix section A, we discuss this at length. Please have a look if you have not seen it yet.\", \"q2\": \"We directly compare with MNM in our work. Our codebase is focused on catbAbI but we'll check if we can add those results in the near future.\", \"q3\": \"We have updated the caption of that figure to hopefully better explain the visualisation. The colour represents the dot product of the write keys of previous steps $k_1 \\\\otimes k_2$ and the query $n \\\\otimes e$ at the \\\"?\\\" position at which the answer has to be predicted.\\n\\nThank you very much for pointing out typos! We have fixed those (and several others) throughout our text. We hope that the latest changes have increased your confidence in our work.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your review. Your summary is correct but please have a look at our recent changes in our general response to the reviews. Here we'll directly respond to your individual questions and comments.\", \"q1\": \"The FWM is initialised with zeros. We did not experiment with training the initial fast weights.\", \"q2\": \"$N_r$ is a hyperparameter and was selected based on the number of inference steps that is probably necessary to solve all tasks in bAbI. We have added an ablation experiment to the appendix (section E figure 7) which shows a drop in performance with fewer read operations.\", \"q3\": \"The vocab size of catbAbI is 200 and the word-embeddings are learned.\", \"q4\": \"This was a typo. We now have PTB and WT2 experiments with the appropriate hyperparameter tuning, and we now beat both baselines. We also added figure 7 which gives an example of how the FWM improves over the AWD-LSTM on PTB.\", \"q5\": \"We have rewritten the caption of figure 2 to be easier to understand. We hope this new version clears up any remaining confusion.\\n\\nWe have fixed the typos you found (and several others) and improved the quality of various figures (including figure 3). Thank you for your positive feedback.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you AnonReviewer2 for your comments. We have created a general response to the reviews, please have a look if you have not seen it yet. Here we'll address your more individual comments. We have also added an ablation experiment to the appendix and improved the text of the subsections about equation 3.\\n\\nThe tensor product in the input pattern of the memory creates a bona fide representational space for the keys learned by the LSTM. This factorisation of the input pattern likely increases the LSTMs representational ability beyond its training distribution. If the LSTM e.g. learns to factorise the key patterns into entities for $k_1$ and locations for $k_2$, then the tensor product $k_1 \\\\otimes k_2$ guarantees a unique vector representation for _any_ key and location pairs which is a requirement for it to be a useful input pattern (minimises interference). To repeat for others: in the extreme, if all keys are orthogonal in their key-space and all locations are orthogonal in their location-space, then all compositions will be orthogonal to each other too. This also includes samples which are technically out-of-distribution and all that is required is that the LSTM can extract the factor representations independently. It is easier to learn such independent functions because there are many samples which share the same factors which can guide the learning of such a function whereas certain pairs are much more rare or even nonexistent. This argument has been introduced by Schlag et. al. (2018) so we only briefly mention it.\\n\\nWe acknowledge in our work that the associative memory used is a rather simple one. Though we'd like to point out that it has to be constructed, edited, and read with representations generated by an LSTM. This is an added layer of complexity. Other work on associative networks is usually about storing a fixed set of patterns. 
In our opinion, extending it to modern Hopfield networks is not as trivial as adding a steep non-linear activation function. This is because we superimpose previous keys and values in the fast weight matrix. Eq. 10 in Krotov and Hopfield (2016) requires access to all previous keys and values to compute the mixing coefficient. Replacing the query with the keys weighted by that mixing coefficient is the iterative mechanism which allows it to converge to one of the keys (the fixpoints). Using a non-linearity like the softmax then results in an attention mechanism which is in its essence equivalent to the Transformer attention but which would also grow with the length of the sequence (or the number of samples to store). The FWM instead accumulates all previous updates into its Fast Weight tensor.\\n\\nIt is correct that the auto-associative type can be converted into a hetero-associative type if its input and output domain is subdivided and properly trained/constructed. However, our memory is controlled by the LSTM and no such constraints are explicitly applied. We believe that the explicit separation of the domain and codomain is a useful bias for the problems considered in this work. \\n\\nWe have added the missing reference, fixed the typos, and edited the figure description that you mention. Thank you for those details!\", \"q1\": \"The $N_r$ steps are not thought of as minimising the energy landscape of the memory but are instead $N_r$ independent queries. This is e.g. demonstrated in figure 3 where we can see how the query-pattern matches the write-pattern at different previous steps. The query is a single step and no convergence analysis can be given because we do not have access to each key separately at time step t.\", \"q2\": \"Yes, this was a mistake in the text. However, we added new results based on our hyperparameter tuning which has the FWM now beat both baselines.\\n\\nWe hope that we have addressed your questions and comments adequately. 
Please let us know if you have any further questions or comments.\"}",
"{\"title\": \"Rebuttal: General Response\", \"comment\": \"We'd like to thank the reviewers for their helpful feedback! Since our initial submission, there have been a few big and several small changes. We think that the newest version of the manuscript is a massive improvement in outcome and overall quality and we hope the reviewers will find the time to appreciate the change.\\n\\nThese are the major differences we'd like reviewers to be aware of:\\n\\n1.) We have found a bug in our catbAbI code related to how states are carried between epochs. After fixing this issue, we reran the hyperparameter search for all our models. We see improvements in all models with **our FWM now at an average of 96.75% test accuracy** (see table 1)! The learning curves of the best seed for each model further visualise the gap between the FWM and our other models (figure 2).\\n\\n2.) We added WT2 to the language modelling results and our own Transformer-XL baseline. After tuning the hyperparameters **we now beat the AWD-LSTM and the AWD-Transformer-XL on PTB and WT2** (see table 2).\\n\\n3.) Reviewer 1 and 4 have both scrutinised the connection with the Fast Weight RNN by Ba et al. (JBFW). Notice that we did mention that we ran JBFW models but decided to exclude it since we were not able to find any hyperparameters that converge. To the best of our knowledge, the JBFW model has never been shown to work on any shared benchmark (like e.g. bAbI or language modelling). We don't find this surprising as it has some obvious technical issues like e.g.:\\n- It is essentially just a classic Elman RNN and likely suffers from vanishing gradients when applied to long sequences.\\n- It only adds to its fast weights. It is difficult to understand how previous information is removed or updated. \\n- It has a fixed fast weight decay mechanism, making it impossible (by design) to store information for many steps. 
\\n- Its memory is updated with the outer product $h_t \\\\otimes h_t$ which, as in a Hopfield network, allows to retrieve $h_t$ from a noisy version of itself ~$h_t$. It is, in theory, possible to convert Hopfield networks to the hetero-associative type, but we believe explicitly constructing hetero-associative memories is in practice much easier to learn.\\n\\n4.) We rearranged and improved the subsection on the writing and reading mechanism which now more intuitively explains our update rule. We also added a proof to the appendix which derives the update rule and rewrote the description of figure 3 regarding the visualisation of how the FWM chains independent facts to be easier to understand.\\n\\n5.) We have added two ablation experiments. One w.r.t. to $N_r$ (the number of read operations) which results in a performance decrease (see figure 7 in appendix E) and a concatenation of the key vectors which also results in a performance drop (figure 8 in appendix E). We now refer to those ablations in the discussion section.\\n\\n6.) We have improved the quality of the document by fixing typos, editing various sentences, and improving the presentation of most figures.\\n\\nFurther details can be found in the responses to each reviewer.\"}",
"{\"title\": \"The work proposes to complement an LSTM with an additional associative memory model with fast changing weights. The proposed combination demonstrates good results on several ML tasks.\", \"review\": \"This looks like an interesting paper with an original proposal. The empirical results on synthetic tasks are also good. The main problem that I am having is with the proposed network, specifically equation 3. I do not see why it makes sense to consider an outer product of $n$ and $e$ as an argument to FWM. As is mentioned in the paper (in appendix A) it would make more sense (both conceptually and from the perspective of complexity) to concatenate those two inputs, or even consider two separate inputs for the associative memory module. The authors argue that in that case the memories would interfere with each other. This is true if a weak associative memory, like the one considered in this work is used. However, if the authors used a modern Hopfield network such an interference would not be a problem. Specifically, consider the situation when after applying the FWM weights to $n$ and $e$ the results are passed through a steep non-linear activation function, like in Ref [1] (see for instance formula 10). This would suppress the interference between the memories and provide a nice memory recovery. Additionally, with these \\u201cstronger\\u201d models of associative memory the key vectors do not have to be orthogonal.\\n\\nExperimental results look fine, however, I think the work would benefit from some comparisons with other proposals for fast changing weights models, for example Ref [2]. \\n\\nI am not sure I understand the last paragraph on page 2. It is very easy to convert Modern Hopfield Networks from the autoassociative to heteroassociative type. One just needs to introduce additional matrices for queries, keys and values, like it is done in Ref [3] when comparing Modern Hopfield Networks with attention. 
Also, when referring to Modern Hopfield Networks, the reference for the original work, Ref [1], is missing.\", \"a_couple_of_presentational_suggestions\": \"1. Figure 1 seems to be inaccurate. In order to generate x_{t+1} one needs to take into account both the output of FWM and current state h_t. Only the first arrow is shown in figure 1.\\n\\n2. After equation 4, what is W_n? Looks like a misprint - should it be W_q? Also in the second line after equation 4 there are some misprints in the formulas.\", \"i_also_have_some_questions\": \"1. Typically most associative memory models converge to a fixed point if one runs them for a long time. It is not obvious to me if dynamical rules described by equations 1-3 converge to a fixed point after a sufficiently large number Nr of iterations. Do they converge to a fixed point or not? \\n\\n2. It looks to me that the results reported in table 2 indicate that LSTM without FWM has lower perplexity than LSTM with FWM on that task. At the same time, the authors seem to say in the text (second paragraph on page 8) the opposite. Could the authors please clarify this? \\n\\nI am willing to increase the scores for this submission if the questions/comments above are addressed.\", \"references\": \"[1] Krotov and Hopfield, NeurIPS 2016. Dense associative memory for pattern recognition, arXiv:1606.01164.\\n\\n[2] Ba et al., NeurIPS 2016. Using fast weights to attend to the recent past, arXiv:1610.06258.\\n\\n[3] Ramsauer et al., 2020. Hopfield networks is all you need, arXiv:2008.02217.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official review #3\", \"review\": \"The solution proposed is the combination of an RNN (LSTM) and Fast Weight Memory (FWM). The LSTM produces a query to the memory used to retrieve information from the memory and be presented at the model output. It also controls the memory through fast weights that are updated through a Hebbian mechanism. The FWM is based on Tensor Product Representations (TPR). The FWM is differentiable and builds upon the work of TPR-RNN from Schlag and Schmidhuber and Metalearned Neural Memory (MNM) by Munkhdalai et al. In the experimental section, the authors propose a concatenated version of the bAbI dataset to test their model with language modeling and question answering. Further, the model is trained on a meta-learning task over POMDPs on graphs, and on language modeling on the Penn Treebank dataset. They show that the LSTM-FWM model generalizes better than without memory and similar models and with smaller capacity.\\n\\n======================================\\n\\nIndeed, the FWM model is relevant to this community and involves current scientific discussion and challenges. The paper is clear and is enjoyable to read. Math derivations and experimental results seem sound. Nevertheless, there are some clarity issues with the PTB language modeling task.\\n\\n======================================\", \"would_appreciate_if_the_authors_can_answer_to_the_following_questions\": \"How is the FWM (tensor $\\\\mathbf{F}_t$) initialized? How does the initialization influence training and performance?\\n\\nHow is Nr selected?\\n\\nWhat is the vocabulary size in catbAbI? Is the embedding layer learned or pre-trained?\\n\\n\\u201cThe experimental results in table 2 demonstrate a relative improvement over the AWD- LSTM baselines, which suggest the benefit of our FWM.\\u201d It is unclear what is the benefit in the PTB dataset. 
The results show that the LSTM model has slightly better perplexity (60.0 / 57.3) than the LSTM-FWM (61.39 / 59.37). Please, could you clarify the above note versus the numbers?\\n\\nDoes Figure 2 have missing details? The caption doesn\\u2019t seem to match the figure or it is unclear what authors are referring to.\\n\\nFigure 3 can benefit from using a bigger font for the node and edge values.\\n\\n======================================\\n\\nI'm inclined to accepting this paper. I found the idea simple but yet effective, and tested correctly in the experimental sections. Would appreciate it if the authors can improve the clarity surrounding Figure 2, and explain the misleading comment regarding the PTB task.\\n\\n======================================\", \"minor_issues\": \"-Page 2: \\u201cAn biologically\\u201d -> \\u201cA biologically\\u201d\\n\\n-Page 2: \\u201cpattern is is different\\u201d -> \\u201cpattern is different\\u201d\\n\\n-Page 5: Please correct with the missing number \\u201csuffered a TODO% drop\\u201d\\n\\n-Page 5: \\u201cfigure 4.1.1\\u201d -> \\u201cFigure 2\\u201d\\n\\n-Page 6: \\u201cnoteable\\u201d -> \\u201cnotable\\u201d\\n\\n==================================\\nUPDATE\\n\\nThank you for replying to my questions and clarifying in the document.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting method for fast memory, but the experiments are not totally convincing.\", \"review\": \"This paper presents a new method called Fast Weights Memory (FWM) to add an associative memory to an LSTM.\", \"model\": [\"FWM updates its fast weights through a differentiable perceptron like update at every step of an input sequence. The slow weights of the LSTM are instead updated only during training using gradient descent.\", \"FWM is based on previous work: TPR (Tensor Product Representation). TPR is a mechanism that uses tensor products to generate unique representations of combination of components.\", \"For long sequences FWM also has specialized components that allow it to update deprecated associations.\"], \"fwm_is_related_to\": \"* TPR-RNN: a sentence-level model for reasoning on text, achieving excellent results on bAbI.\\n* MNM (Metalearned Neural Memory): a word-level model which augments an LSTM with a FFNN as its memory, trained with a meta-learning objective.\\n\\nThe authors propose the new task \\\"catbAbI\\\", a variation of the existing task \\\"bAbI\\\". catbAbI seems to be mostly just a concatenation of the stories, questions and answers in bAbI into a single textual sequence. It's unclear how much harder catbAbI is compare to bAbI in principle.\\n\\nTPR-RNN and MNM are only trained for short sequences and so will have a hard time on catbAbI. The authors show that MNM in particular does poorly on the long sequences in catbAbI.\", \"results\": [\"good performance on catbAbI (language reasoning) -- but this is a new task, so no real baselines in other papers.\", \"good results meta-reinforcement-learning for POMDPs compared to LSTMs.\", \"good results on PTB language models, better than other published models, but not state of the art.\"], \"limitations\": [\"FWM requires an order 3 tensor, which scales poorly in both time and space computational complexity. 
This limits this work to relatively small models.\"], \"questions\": [\"catbAbI simply converts bAbI into a single sequence of tokens. Does this really increase the true difficulty of the task, or is it rather a way of artificially limiting the class of models used to solve the task to simple LM-like models? Is it possible to reconstruct bAbI from catbAbI with simple heuristics?\", \"Could you report results for FWM on bAbI? It\\u2019s pretty unclear at the moment how to compare the results on bAbI of FWM to the ones e.g. in the cited \\u201cMetalearned Neural Memory\\u201d paper. Or at least results on a version of bAbI where predictions are run for each story separately, so that MNM is not as penalized for not being able to deal with long sequences of text.\", \"In figure 2, what does the color represent?\"], \"typos\": [\"Page 2:\", \"*A biologically\", \"*stateful weights that can adapt\", \"Most memory-augmented NNs are based *on content-based\\u2026\", \"Page 3:\", \"becomes a part of the *model's output.\", \"Figure 1: A simplified illustration of *our proposed method\", \"third-order tensor operations using *matrix multiplications\", \"Page 4:\", \"Wq \\u2192 Wn in equation (1)\", \"Page 5:\", \"there is TODO left\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review of \\\"LEARNING ASSOCIATIVE INFERENCE USING FAST WEIGHT MEMORY\\\"\", \"review\": \"Summary:\\n\\nThe authors present a working memory model composed of a recurrent neural network trained via gradient descent and an associative memory based on the approach taken by Ba et al. (2016) in \\\"Using Fast Weights to Attend to the Recent Past\\\". The model consists of an LSTM which takes the input and its own state from the previous step to produce an output (or new state) which is then passed to a fast weight memory (FWM) module.\", \"the_application_of_the_fast_weights_are_decomposed_into_two_steps\": \"read and write. The write step composes the fast weight matrix update where new information is written into F given the LSTM hidden state and the fast weight matrix from the last step. The read step consists of potentially several recurrent \\\"inference steps\\\" over the FWM producing an output (e.g. a next step prediction or encoding).\\n\\nThe authors evaluate the model over two separate datasets. The first is a modified version of the bAbI dataset which concatenates separate bAbI stories together and can be trained and evaluated in either a language modelling (LM) mode or question-answering (QA) mode where knowledge about past facts must be utilized.\\n\\nStrengths & Weaknesses:\\n\\nThe problem itself is well motivated since associative inference is useful in solving problems that require an accurate working memory. Fast weight approaches allow us to learn to produce good state representations of the input sequence via slow weights (h_t), while fast weights provide the associative mechanism to make important links across time. 
\\n\\nThe authors propose a new model that combines a novel read-write mechanism that relies on a number of inference steps over the fast weights allowing a nice disentanglement of read/write operations taking advantage of the associative inference to both add new relevant associative information (v) while also filtering stale data (v_old). \\n\\nThat said the overall form of the model doesn't seem fundamentally different from what is proposed by Ba et al. (2016) who also used fast weights as a way to attend over past hidden states in combination with a \\\"slow\\\" weighted RNN trained via gradient descent optimization albeit some of the details differ.\\n\\nFurther it would be helpful if the authors could clarify the rationale for why particular architectural choices were made. For instance, why are two keys generated in the write operation? \\n\\nResults for both catbAbI don't seem to exceed the performance of the TransformerXL when comparing perplexities in both QA and LM mode and don't exceed TrXL accuracy in LM mode. However it is noted that the FWM model is in fact much smaller. It may have been useful to investigate the gated transformer XL which is known to exhibit stronger stability for RL. Figure 2 is nice though, is there any intuition why the reads vary among strong negative or positive activations as it seems to indicate?\\n\\nAs for the meta-RL problem it would have been nice to see comparisons to baselines other than an LSTM. For instance, Ritter et al. (2020) in \\\"Rapid Task Solving in Novel Environments\\\" introduce a model that combines an episodic memory with self attention to meta-learn how to explore and exploit navigation to goals in connected graphs.\", \"other_points\": \"The labelling for the edges in Figure 3 isn't really clear.\\nThere's a missing reference in the second to last paragraph on page 5: \\\"... 
QA-mode suffered a TODO% drop in accuracy ...\\\"\", \"recommendation\": \"I don't think there's enough here to recommend acceptance. For starters, I don't think there's quite enough justification around the architectural choices of the model and exactly what distinguishes this from the model proposed by Ba et al. which also used fast weights in combination with a \\\"slow\\\" weighted RNN. Next, the results are not strong enough and additional or stronger baselines would have helped paint a better picture of the potential benefits of this approach. For the results in general, while I think that these results point in the possible direction of the utility of FWM I don't believe the paper in its current form demonstrates that FWM exceeds state of the art in the chosen domains in which it was evaluated. That said, I believe this is a promising line of research and encourage the authors to try to address the issues raised.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
1sJWR4y1lG | Deep Learning Is Composite Kernel Learning | [
"Chandra Shekar Lakshminarayanan",
"Amit Vikram Singh"
] | Recent works have connected deep learning and kernel methods. In this paper, we show that architectural choices such as convolutional layers with pooling, skip connections, make deep learning a composite kernel learning method, where the kernel is a (architecture dependent) composition of base kernels: even before training, standard deep networks have in-built structural properties that ensure their success. In particular, we build on the recently developed `neural path' framework that characterises the role of gates/masks in fully connected deep networks with ReLU activations. | [
"deep learning",
"kernel methods"
] | Reject | https://openreview.net/pdf?id=1sJWR4y1lG | https://openreview.net/forum?id=1sJWR4y1lG | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"YwizLtkQ2lO",
"qMYz4zicGJj",
"X6-IMyBkQSc",
"nvs_p4Ex5-a",
"IsQ5CHB9iz_",
"DJE2LYjDrer",
"LI6Z4tf659",
"yxk49SeaxVS",
"V6odCZ5MAsy"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513607,
1605546280722,
1605544523810,
1605544135865,
1605543749646,
1604025350380,
1603862058190,
1603830052593,
1603797533137
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3466/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3466/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3466/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3466/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3466/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3466/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3466/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3466/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper provides a new perspective on deep networks by showing that NPK is composed of base kernels and their dependence on the architecture is explicitized. It is further shown that learning the gates can perform better than random gates.\\n\\nWhile the paper provides interesting understanding neural networks, it is unclear what practical benefit can be drawn from it. On the architectures considered such as FC, ResNet and CNN (btw, it seems restricted to 1-D), it will be important to show that such insights lead to new models or learning algorithms that improve upon the standard practice in deep learning (or get very close to). It is debatable whether drawing such a nontrivial insight alone warrants publication at ICLR, while \\\"nontrivial\\\" itself is a subjective judgement. I understand people differ in their opinions, and the NTK paper has been impactful. Unfortunately since there are quite a few other papers that are stronger, I have to recommend not accepting this paper to ICLR this time.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thanks for your comments.\\n1. Prior works show that performance of standard finite width deep networks (our focus) > infinite width neural tangent kernel counterparts > infinite width GP counterparts. \\n\\\\begin{array}{|lc|}\\\\hline\\n\\\\text{Work}&\\\\text{CIFAR-10 (Test Acc.)}\\\\newline\\\\hline\\n\\\\text{[Lee et al., 2018]}& \\\\text{55.66%}\\\\newline\\n\\\\text{Fully Connected GP}&\\\\text{(45k points)}\\\\newline\\\\hline\\n\\\\text{[Novak et al., 2019]}& \\\\text{67.14%}\\\\newline\\n\\\\text{Convolutional GP}&\\\\newline\\\\hline\\n&&\\\\newline\\n\\\\text{[Arora et al., 2019]}& \\\\text{77.43%}\\\\newline\\n\\\\text{Conv. Neural Tangent Kernel}&\\\\newline\\n\\\\text{(CNTK)}&\\\\newline\\\\hline\\n\\\\text{[Lakshminarayanan}&\\\\text{67.1% (random gates)}\\\\newline\\n\\\\text{and. Singh, 2020]}&\\\\text{79.68% (learnt gates)}\\\\newline\\n&\\\\text{80.32% (standard CNN)}\\\\newline\\\\hline\\n\\\\end{array}\\n\\n2. In DNN with ReLU, the assumption that $\\\\Theta^v_0$ is statistically independent of $\\\\Theta^f_0$ does not hold. However, experiments show that statistically decoupling $\\\\Theta^v_0$ and $\\\\Theta^f_0$ does not degrade the test accuracy.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thanks for the detailed comments. We address the main points below, and we will fix the cosmetic issues pointed out in the final draft.\\n\\n1. Width going to infinity: It is a correct observation \\u201cas the width goes to infinity the relationship tends to equality (up to a constant value)\\u201d.\\n\\n2. We are not training a linear model with the neural path features, but the model learning parameters in a network (this is the value network) in which the gating structure (provided by the feature network) is fixed.\\n\\n3. Results of layer permutation and 'all-ones' input: \\nThe values in the table in Fig 4 are already averaged over the combinatorially many models (24 layer permutations and 2 input configurations =48 models). We missed out mentioning this detail in the paper and thanks for pointing it out. Since the deviation was less than $0.5\\\\%$ we did not present them (as mentioned below the table in Fig 4). \\n\\n4. Robustness in DL regime: \\nIn the DL regime, we permute the masks (when using them in the value network) during training itself, this gives us 24 models. For each of these 24 models, the input can be set to be the image or `all-ones', during training. And robustness here means, that the test accuracy is more or less the same (within $0.5\\\\%$. deviation) for all these 48 different models. No, we do not claim that the results do not change if we permute the layers after training in the DL regime. The fact that we can permute the masks after training and retrain the NPV to recover performance is established in the FL regime.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks for appreciating our contributions.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thanks for your interesting and useful comments. We clarify that we are not proposing i) a new composite kernel method inspired by/built around deep nets, and ii) a new method to optimally extract features to be used for downstream tasks. Our goal is to understand standard finite width deep networks with ReLU activations, for which, we extend work of [Lakshminarayanan and Singh, 2020].\\n1. Comparison with GPs and SVMs: Prior works show that performance of standard finite width deep networks (our focus) > infinite width neural tangent kernel counterparts > infinite width GP counterparts. We are not proposing a new composite kernel method, instead we are showing via analysis that the neural path kernel (NPK) has a composite structure in itself. This is also the reason why we have not compared SVMs with composite kernels. \\n\\\\begin{array}{|lc|}\\\\hline\\n\\\\text{Work}&\\\\text{CIFAR-10 (Test Acc.)}\\\\newline\\\\hline\\n\\\\text{[Lee et al., 2018]}& \\\\text{55.66%}\\\\newline\\n\\\\text{Fully Connected GP}&\\\\text{(45k points)}\\\\newline\\\\hline\\n\\\\text{[Novak et al., 2019]}& \\\\text{67.14%}\\\\newline\\n\\\\text{Convolutional GP}&\\\\newline\\\\hline\\n&&\\\\newline\\n\\\\text{[Arora et al., 2019]}& \\\\text{77.43%}\\\\newline\\n\\\\text{Conv. Neural Tangent Kernel}&\\\\newline\\n\\\\text{(CNTK)}&\\\\newline\\\\hline\\n\\\\text{[Lakshminarayanan}&\\\\text{67.1% (random gates)}\\\\newline\\n\\\\text{and. Singh, 2020]}&\\\\text{79.68% (learnt gates)}\\\\newline\\n&\\\\text{80.32% (standard CNN)}\\\\newline\\\\hline\\n\\\\end{array}\\n2. Meaning and Interpretability of Kernels: [Lakshminarayanan and Singh, 2020] show that most information is stored in the gates, which makes it worthy to analyse the gates in a standalone manner. The NPK is solely related to the gates and sub-networks, unlike GPs related to outputs and neural tangent kernel (NTK) related to gradients. 
The relation NTK = const x NPK captures the role of gates analytically within the existing NTK framework. Also, NPK = $\\\\Sigma\\\\odot \\\\Lambda$ helps in interpretability. $\\\\Sigma$ is input Gram matrix, $\\\\odot$ is the Hadamard product, and $\\\\Lambda$ is physically interpretable since it measures the overlap between active sub-networks. Prior works rooted in kernels (GPs and NTK) propose and study an infinite width kernel. However, our focus here is to use the insights obtained from the NPK, and instead use the deep gated network (DGN) setup to experiments with different gating regimes in finite width networks (our focus). \\n\\n3. Interpreting models as BNNs and GPs: As we are not proposing any new models based on composite kernels, question of interpreting them as BNNs and GPs do not arise in the first place.\\n\\n4. Novelty of our work: As mentioned in the paper, the novelties are:\\n\\n a) We show that the NPK is composed of base kernels. The base kernels correspond to gating features of individual layers, and each gate is related to the hyperplane given by its incoming edges. $\\\\newline$\\nb) Setting $\\\\Sigma=$ matrix of all same entries (by giving `all ones' input to value network) does not degrade test accuracy. This shows that all the useful information is in $\\\\Lambda$, i.e., gates/sub-networks. \\nc) Our experiments challenge the hierarchical view of feature learning, wherein it is believed that, as one proceeds in depth, lower level to higher level features are learnt in the hidden layers of a deep network. We show that performance does not degrade if the masks are permuted, i.e., higher layer masks can be used first and lower layer masks in the end.\\n\\n5. 'How are Kernels Learnt?' translates to 'How are gates learnt?\\u2019. We are not proposing new methods to learn gates. The gates in standard finite width deep nets are learnt by training via off-the-shelf gradient methods.\\n\\n6. 
Neural path feature is a quantity arising in an analytical framework, and needs no additional procedure, unlike Layer-wise relevance propagation where one needs additional backward passing to compute the relevance of the pixels.\\n7. We added the most relevant references to this work, and we believe the work along with the references is self-contained. However, we will be happy to include more references and discuss them in the related work.\\n\\nReferences\\n\\n[1] Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein, DEEP NEURAL NETWORKS AS GAUSSIAN PROCESSES, ICLR 2018.\\n\\n[2] Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, GregYang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein, BAYESIAN DEEP CONVOLUTIONAL NETWORKS WITH MANY CHANNELS ARE GAUSSIAN PROCESSES, ICLR 2019.\"}",
"{\"title\": \"Ok paper, but needs better exposition and model details\", \"review\": \"Overview: The authors examine deep learning from the perspective of kernel methods and demonstrate that convolution layers in these architectures can make DNNs a form of composite kernel learning.\", \"significance\": \"Understanding and interpreting neural networks is an important problem in general; similarly extracting good features key to downstream performance of a neural network. Hence the paper tries to address some important and relevant problems in the field, however, I'm not fully convinced as to whether their procedure is any more interpretable than existing methods or extracts features optimally.\", \"quality_and_clarity\": \"While the work provides sufficient details to understand prior work and the method itself, the key contribution section of the paper needs more work.\", \"novelty\": \"There are many works that focus on understanding neural networks and learning features for downstream prediction from the perspective of kernels. The novelty in this work is limited to allowing gating functions to adapt during training, such that the learnt gates can perform better than random gates.\", \"pros\": \"1) Paper presents a potential solution to a relevant problem\\n2) Paper provides good overview of an existing method that form the basis of the approach.\", \"cons\": \"1) The most significant weakness of this paper is lack of thorough discussion about what the kernels actually mean in terms of understanding what the neural network is doing. How are these kernels learnt? For this, I think the authors need to make concrete comparisons with methods that are deeply rooted in kernels such as GPs or BNNs. For instance, does using a particular composite kernel structure give you the same predictive performance as when using a GP? Can we directly interpret such models as forms of BNNs or GPs? 
How does this work compare to more classic work that uses composite kernels in support vector machines? \\n\\n2) I would have also liked to have seen a better exposition of the interpretability of the method. How does using these composite kernels together compare to other approaches for interpreting deep neural networks like for instance layerwise relevance propagation?\\n\\n3) There is hardly any reference material which suggests the authors need to include a more thorough description and comparison of related work.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Paper proposes an extension of the neural path framework to include composite kernel (sum, product-sum and CNN pooling) learning and learnt gates to show for infinite kernel width gates are important than weights.\", \"review\": \"Paper proposes and extension of neural path framework to include composite kernels which comprise of a) FC networks (Hadamard product of gram matrices), b) residual networks (sum of products of base kernels), and c) CNN max-pooling layer. Furthermore, they also include learnt gates instead of static initialized random gates and show learnt gates perform better.\\n\\nPaper is well written with main technical contribution being theorem 5.1 which shows for infinite width case $w \\\\rightarrow \\\\infty $ the NTK is independent of the weights. It also presents experimental result on MNIST and CIFAR for four proposes regimes of (Definition 5.1) that models are robust to combinatorial variations in layers and inputs. This results in novel makes an important theoretical contribution towards understanding of why DNN with composite kernels perform well in practice.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting work, bringing a new view on the neural tangent kernel\", \"review\": \"This paper builds on recent work characterising deep neural networks in terms of Neural Tangent Kernels and Neural Path Features. Over the past few years, a number of papers have developed the theory of Neural Tangent Kernels, which can be used to interpret infinite width deep neural networks in the context of a particular type of kernel. A recent paper (Lakshminarayanan and Singh, NeurIPS 2020) provided a new perspective on Neural Tangent Kernels for Gated Neural Networks, by decomposing the network into independent paths. For a fixed set of network weights, we can consider each path to give rise to a feature, corresponding to whether this path is active (i.e., is not switched off by one of the gates on the path). Then, the output of the neural network can be viewed as a weighted sum of active paths, equivalently the dot product of the neural path feature vector and a neural path value vector. Lakshminarayanan and Singh showed that under certain assumptions, a kernel defined in terms of the neural path feature is approximately equal to the neural tangent kernel (up to a constant). Specifically, they show that the value of the neural tangent kernel matrix tends to a constant multiple of the neural path kernel matrix as the width of the network goes to infinity. This suggests that the key component in a deep neural network with RELU activations is the gating structure, which defines active subnetworks, as opposed to the values.\\n\\nAs far as I can see, this work extends the analysis of (Lakshminarayanan and Singh, NeurIPS 2020) in two ways. Firstly, the analysis is extended to certain ResNet and Convolutional architectures, showing that in both of these cases we can relate the neural tangent kernel matrix to the neural path kernel matrix using a result analogous to Theorem 5.1 in (Lakshminarayanan and Singh, NeurIPS 2020). 
Secondly, they provide an interpretation of the neural path kernel as a composite kernel composed of layer-wise kernels, giving rise to the title of the paper.\\n\\nI was not very familiar with the work on neural tangent kernels and encountered (Lakshminarayanan and Singh, NeurIPS 2020) for the first time when reviewing this paper. As such, there were things which I didn't fully understand and may have misunderstood in my review. \\n\\nI have one question regarding the theoretical results in both papers. Theorem 5.1 (in both papers) relates the neural path kernel to the neural tangent kernel by showing that the neural tangent kernel for a network in which the gates have been fixed tends to a constant multiple of the neural path kernel as the width of the network goes to infinity. This felt counter-intuitive to me at first reading, as fixing the gates and growing the width to infinity seem to be mutually exclusive. Is it correct to interpret the result as follows? For any fixed gating structure, there is a relationship between the neural path kernel matrix and the neural tangent kernel matrix for a network with that gating structure (i.e., one in which we are only learning the neural path values). As we allow the width to go to infinity this relationship tends to one of equality (up to a constant multiple).\\n\\nI also have a number of questions regarding the empirical results in this paper.\\n1. In the experiments where we have fixed the weights, is the model learning parameters in a network in which the gating structure is fixed or is it learning the neural path value vector as part of a linear model?\\n2. The discussion mentions performance when we fixed the input gram matrix to be a constant in the definition of the neural path kernel (and hence define the neural path kernel in terms of the gating structure only), but does not include numerical results for this case. 
I understand that this may be due to a desire to keep the paper within the recommended 8 pages, but don't see why these results could not have been added in the appendix.\\n3. The discussion mentions performance when we permute the layers of the model and claims this is robust to permutation of the layers, but does not include numerical results for this case. As above, I understand that this may be due to a desire to keep the paper within the recommended 8 pages, but don't see why these results could not have been added in the appendix. Moreover, I wasn't sure what being robust to permutation of the layers means for the case where we are learning both components of the DGN. Is this claiming that the results do not change if we permute the layers after training in the DL regime?\\n\\nAdditional comments\\n1. LS2020 is used in several places to refer to (Lakshminarayanan and Singh, NeurIPS 2020), the recent paper on which this builds. These should be corrected to match the format of the other citations.\\n2. Table 1 contains the information flow for the FCNN. Arguably, this is the simplest of the three architectures and an illustration of the information flow for a CNN could be more useful here. At the very least, the authors should direct the reader to Appendix A, where the CNN is described in more detail.\\n3. The authors refer to this work and the previous work of (Lakshminarayanan and Singh, NeurIPS 2020) as a \\\"paradigm shift in understanding deep learning\\\". While this work seems to be an interesting and promising line of research, I think it is fair to say that we will need to wait to see if it really does provide a paradigm shift in how we understand deep learning.\\n\\nIn general, I think the neural tangent kernel is an interesting and promising line of research in the study of deep neural networks. 
The recent work of Lakshminarayanan and Singh (NeurIPS 2020) seems to add to this and this paper provides a relevant follow-up to that and as such is likely to be of interest to the ICLR community. However, I did not check the proofs in the appendix or the appendix of (Lakshminarayanan and Singh, NeurIPS 2020), on which the results in this paper depend, and hope another reviewer more familiar with this line of work was able to do so.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
"{\"title\": \"Comments to \\\"Deep Learning Is Composite Kernel Learning \\\"\", \"review\": \"#### General Comments\\nThis paper establishes close relationship between CNN and FC-DNN with a composite kernel method. Specially, this paper shows that architectural choices such as convolutional layers with pooling, skip\\nconnections, make deep learning a composite kernel learning method, where the\\nkernel is a (architecture dependent) composition of base kernels. This interestingly indicates that standard deep networks have in-built structural properties that may explain their success before training them. \\nMoreover, this paper develops neural path framework to characterize the role of gates/ masks in FC-DNN. \\n#### Specific Comments\\n(1) It would be more interesting if some superiority of deep learning relative to kernel methods can be provided. \\n(2) Lakshminarayanan and Singh (2020) has developed a neural path framework in the NTK regime. Are there additional challenges when establishing these similar conclusions for DNNs with Relu activation?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}"
]
} |
xoHdgbQJohv | Multiscale Score Matching for Out-of-Distribution Detection | [
"Ahsan Mahmood",
"Junier Oliva",
"Martin Andreas Styner"
] | We present a new methodology for detecting out-of-distribution (OOD) images by utilizing norms of the score estimates at multiple noise scales. A score is defined to be the gradient of the log density with respect to the input data. Our methodology is completely unsupervised and follows a straight forward training scheme. First, we train a deep network to estimate scores for $L$ levels of noise. Once trained, we calculate the noisy score estimates for $N$ in-distribution samples and take the L2-norms across the input dimensions (resulting in an $N$x$L$ matrix). Then we train an auxiliary model (such as a Gaussian Mixture Model) to learn the in-distribution spatial regions in this $L$-dimensional space. This auxiliary model can now be used to identify points that reside outside the learned space. Despite its simplicity, our experiments show that this methodology significantly outperforms the state-of-the-art in detecting out-of-distribution images. For example, our method can effectively separate CIFAR-10 (inlier) and SVHN (OOD) images, a setting which has been previously shown to be difficult for deep likelihood models. | [
"out-of-distribution detection",
"score matching",
"deep learning",
"outlier detection"
] | Accept (Poster) | https://openreview.net/pdf?id=xoHdgbQJohv | https://openreview.net/forum?id=xoHdgbQJohv | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"9OZp8gLjwso",
"yGybtQK7c8k",
"Qd-RLNLg-gS",
"kVM1IURW2ZS",
"e7FkdKrmGMb",
"iwppQY4VF_i",
"QxrCkixDdXq",
"3DHf1uEYOIm",
"lDMeG7QJ2P5",
"ZPivueMgy-S",
"tX5fX0b2N4",
"gZEraxKlusv"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1615835181302,
1610040390661,
1606253234214,
1606243238755,
1606240940496,
1606240629025,
1606240342765,
1606190931173,
1604015018492,
1603906174889,
1603840833828,
1603284151548
],
"note_signatures": [
[
"~Ahsan_Mahmood1"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3465/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3465/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3465/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3465/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3465/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3465/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3465/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3465/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3465/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3465/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Version\", \"comment\": \"Thank you to all the reviewers and the ACs for their time and helpful comments! We have uploaded our final camera-ready version. For posterity, we applied f-AnoGAN to our brain MRI out-of-distribution task and report results in the appendix. Since this experiment was not performed during the review period, we leave our original analysis unchanged and simply include these results as a reference for future readers.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"There seems to be some disagreement between Reviewers, with some borderline scores and some very good scores. After careful consideration of both reviews and answers, and after reading the updated version of the paper with some detail, I believe the approach is valuable. The use of scores for detecting out-of-distribution data is very novel and presents a number of opportunities for further research, both theoretically and empirically. Overall, my recommendation is to ACCEPT the paper. As a brief summary, I highlight below some pros and cons that arose during the review and meta-review processes.\", \"pros\": [\"Straightforward method. \\\"Trivial application\\\".\", \"Novel application to medical images.\", \"Robustness of default hyper-parameters.\", \"Future open sourcing of the code and model checkpoints.\", \"Topic highly relevant to the ICLR community.\", \"Well-written paper + relatively good visualizations.\"], \"cons\": [\"Lack of comparison with other existing approaches.\", \"Intuition/explanation/motivation on why the method works could be improved.\", \"Effect of hyper-parameters could be further discussed/analyzed.\", \"Concerns about applicability of the approach.\"]}",
"{\"title\": \"Uploaded Code\", \"comment\": \"We have uploaded a zip file containing the code we used to train NCSN and our auxiliary models. Additionally, we include notebooks for our main experiments in Section 5 and the 1D GMM analysis in Section 2.\"}",
"{\"title\": \"Thanks for the response. There are remaining concerns.\", \"comment\": \"Thanks for the response.\\nI wish there were more results rather than just a text response.\\n\\nResponse #1 regarding robustness of hyperparameters is good.\\n\\nI see two problems with response #2. A) The baseline added is the simplest possible baseline for general image data that is known to be weak. In my original review, I suggested baselines like AnoGAN which had been shown effective on medical images. I'm not sure how the choice of baseline was made. B) You back up the argument that NCSN can scale to 3D data using unreported results. Without the additional results, this claim remains unjustified. \\n\\nOther minor edits and responses are fine. I would consider increasing the score if my remaining concerns can be further addressed. At the moment, the section about MRI data lacks empirical support. \\n\\nBest,\"}",
"{\"title\": \"Addressing the Concerns of AnonReviewer2\", \"comment\": \"We appreciate the comments you\\u2019ve made and would like to address some of your concerns.\\n\\n#1. Even though these datasets are trivial to separate by humans, we want to emphasize that these experimental scenarios (e.g. CIFAR vs SVHN) are the defacto standard in quantifying the performance for any out-of-distribution detector. Consequently, this out-of-distribution testbed has been used by [1], [2], [3], [4] and others. Moreover, Nalisnick et al. [5] showed that deep generative models such as Glow can in fact be fooled by outlying datasets, even for obvious cases such as CIFAR vs SVHN. Even methods specifically built for the purposes of out-of-distribution detection (such as ODIN) have struggled to accurately separate these easy-for-humans datasets. Therefore, we believe that achieving state-of-the-art performance in this landscape is still a worthwhile endeavor before moving on to more difficult scenarios.\\n\\n#2. We have added a hyperparameter analysis section (as highlighted in the manuscript). Our results show that the model is stable near our defaults, which perform near-optimal already. Furthermore, all our main experiments were run with the same defaults, showing that they do not need to be tuned on a per-dataset basis and can generalize well to different image data domains. Note the significant differences between CIFAR-10, SVHN, and brain MRI domains. Due to these reasons, we can recommend our defaults for various scenarios especially when anomalies are not known beforehand.\\n\\nReferences\\n\\n[1] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution\\nexamples in neural networks. ICLR, 2017.\\n\\n[2] Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing The Reliability of Out-of-distribution Image\\nDetection in Neural Networks. 
6th International Conference on Learning Representations, ICLR\\n2018 - Conference Track Proceedings, jun 2017.\\n\\n[3] J. Ren, Peter J. Liu, E. Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, and\\nBalaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In NeurIPS, 2019.\\n\\n[4] Lee, Kimin, et al. \\\"A simple unified framework for detecting out-of-distribution samples and adversarial attacks.\\\" Advances in Neural Information Processing Systems. 2018.\\n\\n[5] Nalisnick, Eric, et al. \\\"Do Deep Generative Models Know What They Don't Know?.\\\" International Conference on Learning Representations. 2018.\"}",
"{\"title\": \"Addressing the Comments of AnonReviewer3\", \"comment\": \"Thanks for all your comments! We have updated the paper to better reflect our potential next steps. One shortcoming of the method is the disjoint phases of learning i.e training the NCSN first and then the auxiliary models. We would like to explore the possibilities of joint training to improve the representations learnt by the NCSN. Additionally, we are also looking into the possibility of producing per pixel scores, which would allow us to generate heatmaps of anomalous regions in an image.\\n\\nPlease find our updated manuscript with all changed sections and some of your suggested edits highlighted with a colored sidebar.\"}",
"{\"title\": \"Addressing the Comments of AnonReviewer4\", \"comment\": \"Thanks for all the comments! We have addressed some of your concerns below.\\n\\n#1. We apologize for the unclear description, we updated our paper to reflect that $p(x)$ (likelihood) is indeed a good method for detecting outliers. In fact, having a very low likelihood with respect to the data distribution is a strong indicator of outlierness. \\nHowever, Nalisnick et al. showed that current deep likelihood methods struggle to produce low likelihoods for out-of-distribution samples. We present our method as an alternative to using $p(x)$ by considering scores (gradients of $\\\\log p(x)$ ) instead, with the added insight that we need to consider these scores at multiple scales of perturbation.\\n\\n#2. In Section 5.4, we compare our method against Likelihood Ratios. We have further added an additional experiment comparing our performance on FashionMNIST vs MNIST to Likelihood Ratios and ODIN in the Appendix.\\n\\n#3. We included a section looking at hyperparameter sensitivity. Our results show that the default hyperparameters seem to perform near-optimal, with minimal improvements after tuning. Furthermore, we would like to emphasize that all our main experiments in Section 5 utilized the same hyperparameters, regardless of the training dataset (including the brain MRI scans). This shows (at least empirically) that our defaults generalize well to different image data domains. \\n\\nThank you for the helpful suggestions, we have updated the paper accordingly. Please note that we highlighted all changed sections in the revised manuscript with a colored sidebar.\"}",
"{\"title\": \"Addressing the Concerns of AnonReviewer1\", \"comment\": \"Thank you for your in-depth comments! All your feedback was appreciated. Below we have tried to answer your concerns to the best of our abilities.\\n\\n#1. We acknowledge the need to determine whether our scheme is sensitive to its hyperparameters. To that effect, we have included a hyperparameter analysis in the paper. Our experiments show that the defaults already perform near optimal. Additionally, we would like to emphasize that all our main experiments were performed with the same hyperparameters despite the model being trained on different datasets (consider the difference between CIFAR and brain MRIs). This shows, at least empirically, that our defaults are generalizable to different image domains. Also, we have included an analysis on FashionMNIST vs MNIST in the appendix.\\n\\n In Section 2.1, we present a contrived toy example for illustrative purposes. We chose a significantly higher noise scale so that the difference in score norms was exaggerated and clearly identifiable. Further note that in this section we restricted ourselves to $L=3$ in order to plot each noise dimension. For all our main experiments, we kept the hyperparameters $L=10$ and $\\\\sigma_H=1$. We advocate for the use of these defaults as they empirically seem to generalize well to many OOD settings. For two of our auxiliary models (GMM and Flows), we do have access to likelihoods as an easy measure to tune performance. For KD trees, one could choose the Kth neighbour cutoff point according to the largest tolerable false positive rate for the application (which would require an inlier validation set only). We hope to evaluate MSMA on non-image domains in a future work.\\n\\n#2. You raise a valid concern about comparisons to a baseline in our MRI experiment. We have updated the paper with results comparing MSMA to the canonical OOD detection baseline introduced by Hendrycks and Gimpel (2017). 
We observe that MSMA generally outperforms it and observed it to be more stable across multiple runs.\\n\\nWhile we acknowledge Glow's generative capabilities on higher resolution images, we would like to emphasize that our goal is to extend the method to high resolution 3D MRIs, those that can reach 256x256x256. We have updated the paper to reflect this intention. Under this light, it is unclear whether models such as Glow can be easily extended to those regimes with reasonable engineering and computational costs. However, our (unreported) preliminary results show that MSMA works just as well on 3D samples as it did on the 2D MRI slices reported in the paper, with the same hyperparameters. More importantly, generative models like Glow already struggle to detect out-of-distribution samples in low-resolution domains like CIFAR vs SVHN (as shown by Nalisnick et al. 2018). It is difficult to say whether the situation would improve when looking at much higher resolution 3D images. \\n\\n#3. We apologize for lack of clarity in Section 2. $p(x)$ is indeed important in identifying outliers but NCSN outputs gradients of $\\\\log p(x)$ (the score). It is unclear how to remove the numerator as it is only implicitly contained in our scores. Your idea of training likelihood models at different scales is a useful comparison and we plan to pursue that research direction in the future.\\n\\nYou raise a good question about why multiple scales are important, which may not have come across in the paper. We are not guaranteed that one scale will work for all outliers. Consider outliers close to inlier modes e.g. outliers between Low-Density outliers and Inliers in Fig 2. Our large scale results in an overlap in the score distribution of inliers and Low-Density outliers. This makes it difficult to detect the aforementioned \\\"in-between\\\" outliers from the inliers. 
However, this large scale was necessary to get a big enough neighborhood context in order to capture the further away Local-Mode outliers. Thus, all three scales in the range would be necessary. We have updated Section 2 to clarify this intuition. The idea of using multiple scales for detecting local inlier modes is indeed very interesting. We leave such an analysis for future work.\\n\\n#4. We are fully committed to open sourcing the code, the paper will be updated with the GitHub repo in the final version. Unfortunately, we are not allowed to redistribute the medical data. However, it is all publicly available at nda.nih.gov. As a compromise, we plan on making the model checkpoints available in the GitHub repo once the review period is over.\\n\\nFinally, thank you for the minor comments, we have corrected our paper accordingly. Note that we highlighted the changed sections in the revised manuscript with a colored sidebar.\\n\\nReferences\\n\\nDan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution\\nexamples in neural networks. ICLR, 2017.\\n\\nNalisnick, Eric, et al. \\\"Do Deep Generative Models Know What They Don't Know?.\\\" International Conference on Learning Representations. 2018.\"}",
"{\"title\": \"Limited novelty, insufficient experimental or theoretical analysis\", \"review\": \"This paper applies multi-scale score estimates to out-of-distribution detection. They demonstrate the usefulness of multi-scale estimates and adopt an auxiliary model to identify outlier data. The proposed method is evaluated on two different settings and is effective for out-of-distribution detection.\", \"strength\": [\"The motivation of the proposed method is clear. The proposed method makes sense.\", \"The proposed method is quite simple. Seems easy to implement.\"], \"weakness\": \"1. The writing of the paper needs further improvement. This paper is based on denoising autoencoders and the Noise Conditioned Score Network. But the introduction of these important works is not very clear. \\n\\n2. The novelty of the method is marginal. They apply a previous multi-scale score estimation method to out-of-distribution detection settings. Such application is trivial.\\n\\n3. Experiment settings in the paper are quite simple. The proposed method is not rigorously studied on complex datasets. The improvement over previous works for separating SVHN and MNIST is not significant. The method doesn't compare with previous works when applied to brain scan images.\\n\\n4. Important theoretical analysis is missing. The proposed method has several important hyperparameters: number of scales, sigma value for each scale, etc. Real data distributions could be very complex; in this case, how to select these parameters? Discussions about the effect of these parameters are missing. \\n\\n-------\", \"update_after_rebuttal\": \"I appreciate the effort of providing a hyperparameter study. Thanks for the clarification about the dataset used in the paper. \\nI would like to increase my rating from 4 to 5. Since the proposed method is somewhat ad-hoc (shared concern among other reviewers), either experimental or theoretical analysis is important to understand when and why it works. 
However, I don't think these analyses are sufficient in their current form.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Very good paper on a highly relevant topic\", \"review\": \"#### Summary:\\nThe authors leveraged and repurposed Noise Conditioned Score Network (NCSN) that was originally introduced by Song & Ermon (2019) for generative modeling to be used for detection out-of-distribution (OOD) images. The authors unfold the intuition and rationale behind score matching followed by the equivalence of denoising autoencoder (DAE) to derive NCSN as a score estimator and provide an analysis to demonstrate the value of multiscale score analysis. In an experimental analysis on SVHN and CIFAR datasets they demonstrate superiority of their method (MSMA) over previously reported findings in the literature using state-of-the-art models (ODIN, JEM, Likelihood Ratios) on OOD task.\\n\\n##########################################################################\\n#### Reasons for score: \\nI vote for accepting. While the objective foundation of the methodology is adapted from previous work, I find the repurposing of it for fast and effective OOD detection novel and meaningful. The authors have structured and communicated their findings remarkably and provided a well designed experimental evidence to support the methodology for the detection of OOD images task. \\n \\n##########################################################################\\n#### Pros: \\n \\n1. The paper addresses a relevant issue of OOD images detection using norms of score estimates and is highly relevant to the ICLR community. \\n \\n2. The multiscale score analysis was very well done and very well communicated. The visualizations captured very well the essence of the findings and were well highlighted in in the discussion. It was clear, useful and it well justified the following method development. \\n \\n3. This paper provides comprehensive experiments, well related to the scientific context, to show the effectiveness of the proposed method. 
The additional performance metrics in the appendix provide well complementary support. \\n \\n##########################################################################\\n#### Major comment: \\nWhile the paper is overall very well written, structured and communicated, I found the final discussion and conclusion quite lacking. 1) The claim that the autoencoding task better suits deep CNNs should be a bit more elaborated/demonstrated. 2) The sentence on the \\u201cpeculiar phenomenon exhibited by multiscale score estimates\\u201d is also not fully clear. It would be better if the authors explicitly mention to which phenomenon they relate. 3) I would find it important to add to the discussion a paragraph on the paper limitations, for example, any limitations the datasets present, limitations on the applied comparisons, limitations of the method application or others. 4) While the authors mentioned their plan to apply the methodology on a specific task, I think the discussion on future directions is quite lacking. Are there other potential next steps that can be done on top of the proposed method? The analysis on range of scales mentioned at the end of section 2.1 could be an example of that. 5) As a minor suggestion, the authors may consider relating to any wider impact of their work.\\n\\n#### Minor comments:\\n\\nAt two points in the manuscript the authors mentioned a future application of the method to identify atypical morphometry in early brain development. Since this experimental analysis was not actually done, I found it quite distracting and out of the scope of this paper. I would therefore suggest removing it from both introduction and discussion. 
\\nSection 5.3, I would suggest to briefly mention what preprocessing was done on *_all_images_.\\n \\n##########################################################################\\n#### Questions during rebuttal period: \\n \\nPlease address and above comments.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review\", \"review\": \"Summary:\\nThey proposed a new method of OOD detection, MSMA, which uses a new generative model [NCSN] and a 2nd phase of fitting a simple density model on the vector of likelihoods at the various scales.\\nThey showed empirically good results on standard OOD image datasets (CIFAR10 vs OOD, SVHN vs OOD etc.). They were able to achieve perfect separation at most settings, and much improved results for CIFAR10 vs SVHN compared to previous unsupervised methods.\\nThey showed an interesting application for detecting OOD in medical images where the inliers are scans for 9-11 years of age, and OOD are <9 years of age.\", \"strength\": \"MSMA is straightforward, and clearly described. Since it\\u2019s based on a fairly well tested generative model, the part of getting multi-scale likelihood from NCSN should be fairly robust, and reproducible. \\nApplication on medical images is novel, could potentially benefit the ICLR audience if the dataset is released.\", \"concerns\": \"#1 Robustness of method (i.e. sensitivity to hyperparameters) \\nMSMA introduces an auxiliary model, which introduces extra hyperparameters, e.g. number of components in GMMs. \\nAlso, as the authors pointed out, choosing different noise scales for NCSN gives vastly different results in terms of OOD detection. \\nIn the multi-scale case, there is a high degree of freedom in how to choose the various noise scales. \\nIn Figure 1b, even in the multi-scale case MNIST and FashionMNIST seemed to have overlapping score vectors. It would be good if thorough results for this pair are included in the experiment section. \\nA.2 presents somewhat contradicting descriptions to Section 2.1. A.2 states that all experiments are done with the largest noise scale of 10, whereas in S2.1 they said it\\u2019s only effective at a noise scale of 20. \\nThis raises the concern of how applicable this method is to domains not studied in [NCSN]. E.g., on non-image OOD tasks e.g. 
those in [SEBM]. How would one choose the scale schedule in general? \\nUnlike Flows, VAEs, and GANs where likelihood can be used to do model selection (e.g. using AIS), it\\u2019s unclear how to do model selection with NCSN. This makes me wonder if the range of hyperparameters used for the auxiliary models is generally applicable, in the case that the base NCSN model is trained with very different hyperparameters. \\n\\n#2 A somewhat restricted coverage of existing methods\\nBoth in the introduction and conclusion the authors emphasize how MSMA is developed with the application of medical image OOD detection in mind. They dismiss comparison to density methods by saying they cannot be used with their high-resolution images. This is simply not true. [Glow] can easily learn images at 256x256, whereas the images here are only 110x90. Also, another very popular family of methods for OOD detection in medical images are those related to [fAnoGAN]. GANs are more than capable of learning images of these scales. \\n\\nProviding a new and meaningful application of OOD detection such as the MRI dataset provided here is a good contribution, but it seems to me that the authors did not attempt to compare to other methods, but only tried to show MSMA somewhat works. \\n\\n#3 Incomplete understanding of the method\\nSection 2 tries to provide some intuition about the effectiveness of the method. However, the analysis is quite brief. Here I try to list a few questions:\\n Most of the reasoning of how the \\u201cscore\\u201d is intuitively useful is based on how the \\u201cdensity\\u201d appears in the denominator. This makes me wonder if the numerator (\\u201cgradient of the density\\u201d) is of any importance, or maybe we can improve the method if that term is removed. 
One obvious thing to compare to here is to just train a Flow or VAE at different noise scales and compare to them.\\nFigure 2 and section 2.1 kind of explain why a large noise scale is useful, but not why using multiple scales is useful. Why not show using a single best scale in the experiment section? \\nFigure 2 uses the construction of a local-mode outlier to justify why large-scale is needed, but does this construction really translate to the real world scenario? Is it possible to show the difference in prediction on the real datasets when using different scales, much like in the toy setting? If so, this method can also be useful for selecting the local-mode outliers in the image setting, which could inspire new applications\", \"minor_comments\": \"In Section 5.4. \\u201c \\u2026 is not tackled by classifier based OOD detectors \\u2026 \\u201c, this is wrong. [Lee] and many works after do study this. \\nTable 2 caption is not describing Table 2\\n\\nOverall, the method is simple and effective on the CIFAR10 benchmark. It\\u2019s possible that this method is a worthy contribution. However, I\\u2019m not sure about how generally applicable this method is because I don\\u2019t see experiments in different settings, ablation studies, and/or adequate understanding of why MSMA is better than other unsupervised methods. For the MRI task, the author did not compare to relevant baselines. Lastly, the authors show no intention of open sourcing their code/dataset, which undermines the value of an empirical study.\", \"references\": \"[NCSN] Song, Yang, and Stefano Ermon. \\\"Generative modeling by estimating gradients of the data distribution.\\\" Advances in Neural Information Processing Systems. 2019.\\n[SEBM] Shuangfei Zhai, Yu Cheng, W. Lu, and Zhongfei Zhang. Deep structured energy based models for anomaly detection. In ICML, 2016.\\n[Glow] Kingma, Durk P., and Prafulla Dhariwal. 
\\\"Glow: Generative flow with invertible 1x1 convolutions.\\\" Advances in neural information processing systems. 2018.\\n[LikelihoodRatio] J. Ren, Peter J. Liu, E. Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, and Balaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In NeurIPS, 2019.\\n[fAnoGAN] Schlegl, Thomas, et al. \\\"f-anogan: Fast unsupervised anomaly detection with generative adversarial networks.\\\" Medical image analysis 54 (2019): 30-44.\\n[Lee] Lee, Kimin, et al. \\\"A simple unified framework for detecting out-of-distribution samples and adversarial attacks.\\\" Advances in Neural Information Processing Systems. 2018.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting work on OOD detection, but could be improved by a more intuitive explanation, and more analysis.\", \"review\": \"Thank you for the clarifications.\\nI did not change my rating, since I am unclear how the proposed method compares to SOTA beyond CIFAR-10/SVHN.\\nTable 14 suggests that Likelihood Ratios is considerably better than the proposed method.\\nFurthermore, neither in Table 1, nor in Section 5.4, I can find any results of the Likelihood Ratio method.\\n----\\nThe paper addresses the problem of detecting out-of-distribution (OOD) samples at test time, i.e. samples which belong to a class for which there was no training data.\\nFor that purpose the authors propose to represent each sample x using $||s(x,\\\\sigma_1)||, ..., ||s(x,\\\\sigma_L)||$, where $s(x,\\\\sigma) = \\\\nabla_x \\\\log q_{\\\\sigma}(x)$, and $q_{\\\\sigma}$ is the the original model probability $p(x)$ + gaussian noise with variance $\\\\sigma^2$. They call this L-dimensional space the score norm space.\\nThe authors experimentally show that OOD samples tend to be rather distinct from in-distribution samples in the score norm space.\\nThey exploit this, and propose to train either a Gaussian Mixture Model, Autoregressive Flow, or k-nearest neighbor model with the training data's score norm space representation.\", \"strong_points\": \"- On CIFAR-10/SVHN they show that their method performs better than the Likelihood Ratios methods from (Ren et al 2019).\\n- On several other baseline datasets they show that their method performs better than Confidence Thresholding (DeVries & Taylor, 2018) and ODIN (Liang et al 2017).\\n\\n\\nUnclear/Weak points:\\n\\n- The proposed method is quite ad-hoc. 
Therefore, it would be helpful to include some experimental/theoretic analysis of why the method works, and when it does not work.\\nThe authors try to provide some intuition in Section 2.1, though the explanation seems confusing to me:\\non page 2, the authors argue that a small value of $p(x)$ is not a good method to detect outlier samples (referring to Nalisnick et al 2018), \\nbut the Toy example in Section 2.1, page 3, discusses how their method can detect samples for which $p(x)$ is low. \\n\\n- The experimental results would be more convincing if their method were compared to a recent method like Likelihood Ratios (Ren et al 2019) also on other datasets than CIFAR-10/SVHN.\\nFor example, (Ren et al 2019) also showed results for FashionMNIST/MNIST.\\n\\n- How sensitive is the method to the choice of L and other hyperparameters?\", \"minor\": [\"In the reference list the authors should at least add the conference name to each publication.\", \"\\\"has been observe\\\" -> - \\\"has been observed\\\"\", \"\\\"other unseen datasets It is important\\\" -> \\\"other unseen datasets. It is important\\\"\", \"\\\"loglikelihoods\\\" -> \\\"log-likelihoods\\\"\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
Zc36Mbb8G6 | Data Instance Prior for Transfer Learning in GANs | [
"Puneet Mangla",
"Nupur Kumari",
"Mayank Singh",
"Vineeth N. Balasubramanian",
"Balaji Krishnamurthy"
] | Recent advances in generative adversarial networks (GANs) have shown remarkable progress in generating high-quality images. However, this gain in performance depends on the availability of a large amount of training data. In limited data regimes, training typically diverges, and therefore the generated samples are of low quality and lack diversity. Previous works have addressed training in low data setting by leveraging transfer learning and data augmentation techniques. We propose a novel transfer learning method for GANs in the limited data domain by leveraging informative data prior derived from self-supervised/supervised pre-trained networks trained on a diverse source domain. We perform experiments on several standard vision datasets using various GAN architectures (BigGAN, SNGAN, StyleGAN2) to demonstrate that the proposed method effectively transfers knowledge to domains with few target images, outperforming existing state-of-the-art techniques in terms of image quality and diversity. We also show the utility of data instance prior in large-scale unconditional image generation and image editing tasks. | [
"GAN",
"transfer learning",
"fewshot learning",
"image generation"
] | Reject | https://openreview.net/pdf?id=Zc36Mbb8G6 | https://openreview.net/forum?id=Zc36Mbb8G6 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"MTl1O7FNOe6",
"W19FHwzaWNo",
"3TdX7wH-GJ8",
"HXGABEGt47B",
"q-L_bYkZb0K",
"L750_Q6ooky",
"2acDg-rHgL7",
"oon5Nihf3xZ",
"zljEUjgWYu3",
"TFaPtWq-J7x",
"4ae1pqab0r",
"4Ec335jbOZ-",
"hSCtanfL2fP"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040363210,
1606299230847,
1606298811047,
1606298745078,
1606298449746,
1606298282462,
1606297998371,
1606297828379,
1606297173724,
1603945658069,
1603904581842,
1603902296869,
1603733838854
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3462/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3462/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3462/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3462/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3462/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3462/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3462/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3462/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3462/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3462/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3462/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3462/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper proposes to use a feature extractor (encoder) $C(x)$, pre-trained with label supervision or contrastive learning on a large image dataset, to both regularize the discriminator's last feature layer $D_f(x)$ and encode the data $x$ itself as the conditional input of the generator $G(z|G_{emb}(C(x)))$. The main purpose is to help the training of GANs when there is a limited number of images in the target domain. A clear concern of this approach is that to generate a fake image, one will need to first sample a true image, making the model unattractive if the training dataset size is large (need to store the whole training dataset even after training). To mitigate this issue, the authors propose to fit up to 200k randomly sampled $G_{emb}(C(x))$ with a GMM with 1k components. To validate the practice of requiring a GMM (a shallow generative model) to help a GAN (a deep generative model) to generate, the authors have done a rich set of experiments under state-of-the-art GAN architectures or training methods (SNGAN, BigGAN, StyleGAN2, DiffAugment) to illustrate the efficacy of the proposed data instance prior and its compatibility with the state-of-the-art methods in a variety of settings. In the AC's opinion, the paper is missing references to 1) related work that combines VAE (or some other type of auto-encoder) and GAN, which often helps stabilize the GAN training [1,2,3], 2) VAE with a VampPrior [4], and 3) more broadly speaking, empirical Bayes related methods where the prior model is learned from the observed data (see [5] and the references therein). The potential advantages of using a VAE rather than a GMM to help a GAN to generate include: 1) there is no need to store 1k GMM components, which may require a large amount of memory; 2) there is no need to subsample the training set; and 3) the VAE and GAN can be jointly trained. 
The AC recommends that the authors discuss the connections to these related works in their future submission.\\n\\n[1] Larsen, Anders Boesen Lindbo, et al. \\\"Autoencoding beyond pixels using a learned similarity metric.\\\" International conference on machine learning. PMLR, 2016.\\n\\n[2] Zhang, Hao, et al. \\\"Variational Hetero-Encoder Randomized GANs for Joint Image-Text Modeling.\\\" International Conference on Learning Representations. 2019.\\n\\n[3] Tran, Ngoc-Trung, Tuan-Anh Bui, and Ngai-Man Cheung. \\\"Dist-gan: An improved gan using distance constraints.\\\" Proceedings of the European Conference on Computer Vision (ECCV). 2018.\\n\\n[4] Tomczak, Jakub, and Max Welling. \\\"VAE with a VampPrior.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2018.\\n\\n[5] Pang, Bo, Tian Han, Erik Nijkamp, Song-Chun Zhu, and Ying Nian Wu. \\\"Learning Latent Space Energy-Based Prior Model.\\\" Advances in Neural Information Processing Systems 33 (2020).\"}",
"{\"title\": \"Common Response to Reviewers/AC\", \"comment\": \"We thank all reviewers for insightful and constructive feedback. We are encouraged to note that reviewers found the approach of Data Instance Prior (DIP) interesting/convincing (R2,R3,R4), the quantitative and qualitative results extensive/effective (R2,R3,R4), and that the approach makes sense (R1,R3). We have corrected all the typos and addressed any lack of clarity in the paper\\u2019s final version.\", \"below_we_provide_an_overview_of_the_major_updates_made_in_the_paper\": [\"Rewritten methodology section for increased clarity of training and inference stages. We have also added a pseudo-code for our algorithm DIP.\", \"Added a comparison table that shows improvement with styleGAN2 and BigGAN architecture over DiffAugment technique by varying the amount of training dataset (100%, 20%, 10%) on CIFAR10 and CIFAR100 datasets in Appendix Table 9.\", \"Added an ablation experiment by varying the choice of the loss (hinge, non-saturating, and Wasserstein loss) for GAN training as shown in Appendix Table 8.\", \"Updated Table 1 by comparing the results when transfer learning (initialization with pre-trained model weights) is not applied.\", \"Added a figure (as Figure 2) for comparison with DIP that suggests discriminator overfitting in the baseline approach when training is done in a limited data setting.\"]}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We would like to thank the reviewer for providing valuable feedback for the paper. We are pleased to know that the reviewer finds our experiments and DIP\\u2019s application effective. We now answer the raised concerns.\\n\\n**Q1: How can VGG pretrained network also help for Anime dataset ?**\\n\\nTo examine the usefulness of Vgg features on the Anime dataset, we evaluate it on the anime character classification task. We took a subset of 70k images from the Anime Face dataset that had labels assigned among 50 character tags. Each character tag has around 1000-1500 images. We train a single linear classifier on VGG-16 features of 50k samples and evaluate it on the remaining 20k samples. We observe an accuracy of ~75% and ~67% on the training and test sets respectively. When a single linear classifier is trained upon SimCLR features, the respective accuracies were ~81% and ~63.5%. This highlights that even for fine-grained and out-of-domain distributions like Anime, pretrained VGG-16 features are semantically rich enough to achieve a decent classification score.\\n\\n\\n**Q2: As for the experiments, it lacks a comparison with results that transfer learning is not applied.**\\n\\nWe have now added the results of baseline training when the weights of the network are randomly initialized (denoted as Scratch) and Scratch with DIP in the few-shot setting in Table 1. We observe that the FID scores are better than TransferGAN+DIP, and interpolation between conditional priors is shown in Fig 7.\\n\\n| | Anime | Faces | Flower |\\n|-|-|-|-|\\n|Scratch | 120.38 | 140.66 |124.02 |\\n|+DIP | 66.85 | 68.49 | 94.22 |\\n\\n\\nWe also show a comparison of DIP with the baseline without transfer learning (i.e. random initialization of model weights) for the limited data setting on CIFAR-100 with the BigGAN architecture while varying the dataset size. 
We have added these results in Appendix Table 9.\\n\\n| CIFAR-100 | 100% | 20% | 10% |\\n|-|-|-|-|\\n|BigGAN Baseline | 20.37 | 33.25 |42.43 |\\n|BigGAN +DIP | 12.28 | 21.70 | 31.48 |\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We would like to thank the reviewer for providing constructive feedback for the paper. We are pleased to know that the reviewer finds our DIP approach interesting and our experiments extensive in the paper. We now address the raised concerns and provide results for the suggested experiments.\\n\\n**Q1: IMLE description in (2) is vague. It seems that a gaussian sample $z$ is taken, but on the other hand, for a given network $G$, $z$ is optimized to match $x$. The restrictions on $z$ are unclear.**\\n\\nWe have re-written the IMLE description and its corresponding Eq. 2 to add the restrictions on $z$ in the paper, and we hope that this will increase its clarity. In IMLE, $z$ is sampled from a normal distribution, and during each update the generator is optimized to reduce the distance between the real sample $x$ and its nearest neighbor $G(z)$ from that mini-batch of sampled $z$\\u2019s.\\nEq. 2 of IMLE is used neither during training nor at the inference stage of our methodology. Our final training objective is now given in Eq. 3, which is the sum of the real/fake loss and the projection loss, similar to c-GAN [1].\\nAlso, we have updated our Methodology section by modifying Eq. 3 to contain this final loss (real/fake loss plus projection loss) and have added a pseudo-code of our DIP training.\\n\\n**Q2: It seems that hinge-loss is used for GAN based on (1)? It would be important if the authors could comment on the choice of the divergence/loss in this setting. One may wonder that the limited data and mode-collapse could be better handled with Wasserstein distance.**\\n\\nAs suggested by the reviewer, we also ran an experiment to analyze the role of the loss/divergence used during GAN training. 
The table below shows the FID scores of models trained using (1) non-saturating loss (NS), (2) Wasserstein loss (W), and (3) hinge loss (H) (used in the paper).\\n \\n| Method || Anime ||| Faces ||\\n|-||-|||-||\\n| | NS | W | H | NS | W |H |\\n|FreezeD | 102.43 | 148.99 | 109.40 | 105.34 | 209.23 | 107.83 |\\n| FreezeD + DIP-Vgg16 | 82.49 | 74.91 | 93.36 | 73.38 | 71.05 | 77.09 |\\n|DiffAug |106.96 | 252.11 | 85.16 | 107.18 | 325.85 | 109.25 |\\n|DiffAug + DIP-Vgg16 |48.61 | 56.43 | 48.67 | 68.66 | 81.03 | 62.44 |\\n\\nWe use a gradient penalty with the Wasserstein loss in FreezeD but not in DiffAugment, as the latter already includes a consistency regularization loss and using both leads to unstable training. The Wasserstein loss works significantly better in the case of FreezeD+DIP but worse when used with the DiffAugment training strategy. We have added this ablation experiment\\u2019s results in Table 8 of the Appendix in our paper.\", \"reference\": \"[1] Takeru Miyato and Masanori Koyama. cgans with projection discriminator. In International Conference on Learning Representations (ICLR), 2018\"}",
"{\"title\": \"Response to Reviewer 2 continuation\", \"comment\": \"**Q6: In table 1, the result when combining FreezeD on Flower is low. Could the authors explain it?**\\n\\nFor experiments on the Flowers dataset (a subset of the Flowers dataset containing only the \\\"passion\\\" class of flower), DIP performs only slightly better (TransferGAN, DiffAugment, and BSA) or worse (e.g. in FreezeD by the FID metric). Here, FID is calculated using only 251 real images from the reference distribution, as compared to the 10k/7k separate test sets available for the Anime and Faces datasets respectively. We believe that FID calculated using a smaller number of real images is a less reliable indicator of the generator\\u2019s performance [4].\\nWe also observe that just by memorizing the given 100 training images it is possible to achieve an FID of 66.91. On analyzing the images generated by the baseline model, we observe that it overfits to the given 100 training images with poor interpolation between conditional embeddings, as shown in Fig 6 in the paper. We believe that because of this overfitting, the FID metric of the baseline is better when the sample size of real images is small. In the table below, we show the FID of the baseline and DIP model.\\n\\n| | Baseline | DIP | Train set |\\n|-|-|-|-|\\n| FID |91.80 | 120.43 | 66.91 |\\n\\nTo further analyze this, we conduct an ablation experiment, where we create another dataset of Flowers which we call Flower-Diverse. This dataset is created by randomly sampling one image from 100 out of 102 classes of the Oxford Flowers dataset, hence creating a 100-image training dataset. The remaining images from these 100 classes (~8000 images) are used as images from the real distribution in the FID calculation. 
Here, we observe the benefit of DIP with baseline approaches as shown below on the Flower-Diverse dataset.\\n\\n|Flower-Diverse | FreezeD |DiffAugment |\\n|-|-|-|\\n| Baseline | 90.01 | 82.01 |\\n| +DIP-Vgg16 | 86.55 | 50.56 |\\n\\n**Minor comments:**\\n\\n(1) Is TransferGAN cited correctly in the section of Baselines and Datasets in page 6 ?\\n\\n We thank the reviewer for pointing this out.\\n\\n(2) What is the x and hat of x in E.q 3.\\n\\nAs in GANs, each training step comprises $d_{steps}$ discriminator updates and a single generator update; a popular choice for $d_{steps}$ is 4, and the same is used for our BigGAN experiments. We denote $x$ and $\\\\tilde{x}$ as images sampled from the real distribution for the $G$ and $D$ updates respectively. In our implementation, where $d_{steps} = 4$ and $G_{step} = 1$, we take $x$ to be the same as the last batch (i.e. the 4th batch when $d_{steps}=4$) of images sampled from the true distribution for the $D$ update; thus $x$ is a subset of $\\\\tilde{x}$. We have now removed this notation from our paper since the updated Eq. 3 mentions the losses for $G$ and $D$ separately, and we have added a pseudo-code in the methodology section for clarity.\\n\\n(3) In the abstract, Is it 'with limited success' or 'with limited data'?\\n\\n It\\u2019s with limited data. We have rectified this in the paper.\", \"references\": \"[1] Karras T, Aittala M, Hellsten J, et al. Training generative adversarial networks with limited data[J]. arXiv preprint arXiv:2006.06676, 2020.\\n\\n[2] Zhao S, Liu Z, Lin J, et al. Differentiable augmentation for data-efficient gan training[J]. arXiv preprint arXiv:2006.10738, 2020.\\n\\n[3] Takeru Miyato and Masanori Koyama. cgans with projection discriminator. In International Conference on Learning Representations, 2018\\n\\n[4] Esther Robb et al. Few-Shot Adaptation of Generative Adversarial Networks, arXiv preprint arXiv:2010.11943, 2020\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We would like to thank the reviewer for providing valuable feedback for the paper. We are pleased that the reviewer finds our approach interesting and the experiments effective. We now clarify the raised concerns.\\n\\n**Q1 : Does the final objective contain E.q 2? E.q 3 seems to contain two losses?**\\n\\nOur final training objective is now given in Eq. 3, which is the sum of the real/fake loss and the projection loss, similar to c-GAN [3].\\n\\n**Q2 : Is E.q 2 used at inference time which is combined to the Inference section? I fail to connect the inference section to the whole paper.**\\n\\nApologies for this confusion. No, Eq. 2 of IMLE is used neither during training nor at the inference stage of our methodology. As DIP-trained models are essentially conditional GANs, the inference stage requires both a data instance prior $C(x)$ and $z$ as inputs to generate images, i.e. $G(z|C(x))$.\\n\\nIn the case of few-shot generation, as there are only a few training images (~100), we simply save the corresponding training images\\u2019 prior features and perform interpolation in this prior feature set to obtain more conditional priors for inference (Eq. 5 in the paper).\\nIn the case of large-scale training, storing the features of the complete training set becomes memory inefficient; therefore, to avoid saving the large set of training priors, we propose to learn a distribution over the training image priors (using clustering or a GMM) to enable sampling of priors (Eq. 6 in the paper).\\nBased on the above concerns, we have rewritten our Methodology section to incorporate these comments by updating Eq. 3 with the final loss (real/fake loss plus projection loss) and adding a pseudo-code of our DIP training.\\n\\n**Q3: If E.q 2 is utilized in this paper, what is different to BSA?**\\n\\nWe don\\u2019t use E.q 2 in our paper. 
We mention IMLE since it serves as motivation for our approach of having an instance-level condition that promotes images generated using the prior $C(x)$ of image $x$ (i.e. $G(z|C(x))$) to be close to $x$ in discriminator space, but the specific formulation described in Eq. 2 is not used in our methodology.\\n\\n**Q4: How well does StyleGAN2 work with projection loss?**\\n\\nIn comparison with BigGAN, we observe that the performance improvement from the projection loss is smaller with the StyleGAN2 architecture in few-shot data settings.\\nTherefore, we have also added results on using our conditional architecture of StyleGAN2, as described in the Appendix, for the CIFAR10 and CIFAR100 datasets, which show improvement over the DiffAug [2] method in the limited data setting:\\n\\n| Method || CIFAR-10 ||| CIFAR100 ||\\n|-||-|||-||\\n| |100% | 20% | 10% | 100% | 20% | 10%|\\n|StyleGAN2 DiffAug | 9.89 | 12.15 |14.5 |15.22 | 16.65 | 20.75 |\\n|StyleGAN2 DiffAug+DIP | 9.50 | 10.92 | 12.03 | 14.45 | 15.52 | 17.33 |\\n\\n\\nWe note that the authors of ADA [1] have also shared results on using the projection loss in the StyleGAN2 architecture for CIFAR10 for class-conditional image generation.\\n\\n**Q5: What is the role of design and hence the goal of both G_emb and D_emb? Do they just map the extracted embedding to the same dimension required by conditional GANs? What are linear transformation matrices?**\\n\\nYes, the goal of both $G_{emb}$ and $D_{emb}$ is to map the extracted embedding to the dimension required by a conditional GAN, as used in the BigGAN architecture.\\n\\nG_emb maps the extracted prior to a shared embedding space which is then used as input to all conditional batch-norm layers in the generator. It reduces the dimension of the shared embedding space (usually 128) as compared to the dimension of the extracted pre-trained feature prior (512 for Vgg-16 and 2048 for SimCLR). 
\\nD_emb maps the extracted prior to the same dimension as discriminator features to apply projection loss similar to class embedding matrix in cGANs[3].\"}",
"{\"title\": \"Response to Reviewer 1 (Question 8-10)\", \"comment\": \"**Q8: Show via experiments that the proposed method DIP can prevent discriminator overfitting.**\\n\\nWe have included a similar analysis for discriminator overfitting as done in [1,2] in our Methodology section as Fig 2. We train a baseline and a DIP model on 10% of the CIFAR-100 dataset with the BigGAN architecture. We report the real/fake discriminator score on the training/validation/generated samples during the course of training. In baseline training, the discriminator score of real images keeps increasing while the discriminator score on validation images and the FID quickly degrade. This suggests overfitting in training. However, in the case of Baseline+DIP training, the discriminator score on training and validation images remains similar and higher than the generated data\\u2019s discriminator score. Also, the FID value keeps decreasing and saturates instead of abruptly increasing as the training progresses.\\n\\n**Q9: What\\u2019s the motivation to do interpolation on the data instance prior? How to make sure that the interpolation on the prior (Eq.4) is smooth?**\\n\\nIn our experiments, we observed that the prior $C(x)$ controls the high-level details and the latent code $z$ influences the fine-grained details of images in $G(z|C(x))$. In the few-shot setting, we only have access to a few data priors (~100). To generate more priors, and hence images with high diversity, we leverage the interpolation of priors.\\nWe observe smooth interpolation between data priors in the trained model without enforcing any explicit constraint, as shown in Fig. 3. This shows the generalization of our DIP-trained model in the prior space.\\n\\n**Q10: On semantic diffusion for image manipulation. It is not clear how to obtain the results shown in Figure 4/9 on custom editing, e.g. there is only one input image, how to compute and exchange the $C(x)$? 
Also, the effects of manipulating high-level semantics and fine-grained details are not observed/discussed in the experiments.**\\n\\nTo perform semantic diffusion, an image $x$ is manipulated (cut-mix) or picked from out of domain (sketches), and its pre-trained feature $C(x)$ from the Vgg-16/SimCLR network is directly used as the prior condition in the GAN.\\nFor example, in cutmix, given images $I_1$ and $I_2$, we generate a new image that is a cutmix version of $I_1$ and $I_2$. Let the modified image be $I_x$ = cutmix($I_1$, $I_2$), which is shown as the first image in Fig 5/10. To generate images similar to $I_x$, or perform semantic diffusion, we calculate the prior representation of this modified image as $C(I_x)$. Then, we use this prior as input to generate images $G(z|C(I_x))$, where $z$ is randomly sampled from the standard normal distribution. In Fig 5, the first column corresponds to cutmix images $I_x$ and the 2nd and 3rd columns represent images generated via $G$ using the prior $C(I_x)$.\\nWe have shown results in Fig 9 where changing $z$ while keeping the prior fixed leads to changes in the fine-grained details of the generated image, and interpolation in the prior leads to high-level semantic changes.\", \"references\": \"[1] Karras T, Aittala M, Hellsten J, et al. Training generative adversarial networks with limited data[J]. arXiv preprint arXiv:2006.06676, 2020.\\n\\n[2] Zhao S, Liu Z, Lin J, et al. Differentiable augmentation for data-efficient gan training[J]. arXiv preprint arXiv:2006.10738, 2020.\\n\\n[3] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.\\n\\n[4] Takeru Miyato and Masanori Koyama. cgans with projection discriminator. In International Conference on Learning Representations, 2018\"}",
"{\"title\": \"Response to Reviewer 1 (Question 5-7)\", \"comment\": \"**Q5: When the data is large-scale, doing clustering becomes inhibitive.**\\n\\nYes, we agree that clustering is inhibitive for large-scale datasets e.g. ImageNet and LSUN-Bedroom where the training data is in the order of millions. We observed this during our experiments and therefore we have reported the performance of K-means/GMM using a subset of randomly sampled 200K instances for these datasets in our paper. We have added this detail in the paper now.\\nBelow we show the relationship between the number of random samples used for fitting GMM/K-means and the corresponding FID (average of 3 runs with a standard deviation of less than 1%) on the LSUN-Bedroom dataset for DIP-Vgg16 trained model on LSUN-Bedroom.\\n\\n| | 50k | 100k | 200k | 500k | $D_{prior}$ (3 M) |\\n|-|-|-|-|-| - |\\n| GMM | 4.99 | 4.92 |4.81 | 4.43 |3.77 |\\n|Time (in secs) | 383.96 | 1063.99 | 1993.93 | 4397.56 | - |\\n| K-means | 3.84 | 4.20 |4.72 | 5.36 | - |\\n| Time (in secs) | 210.67 | 546.57 | 1344.08 | 7072.34 | - |\\n\\nThis experiment was performed on a system with 32 CPU cores, 64 GB RAM, and processor Intel(R) Xeon(R) CPU @ 2.20GHz. We will be happy to include these details in the Appendix for clarity.\\n\\n**Q6: In Table 3, the results are not SOTA on CIFAR10 and fall behind the recent works on generation with limited data, e.g. DA [1], DiffAug [2]. What is the advantage of DIP when compared to these methods?**\\n\\nAuthors of [1,2] use random horizontal flip augmentation for training. In our paper, we have reported FID scores for experiments that do not use horizontal flip augmentation. We observe that we can achieve an unsupervised (i.e not using the class labels) FID score of $\\\\\\\\textbf{9.70}$ and $\\\\\\\\textbf{12.89}$ on CIFAR-10 and CIFAR-100 respectively on BigGAN architecture by utilizing this augmentation and keeping all other hyper-parameters the same as in Table 3. 
This is better than the FID scores of 9.89 and 15.52 reported in [2] on the StyleGAN2 architecture for CIFAR-10 and CIFAR-100 respectively. In the following table, we also show the benefit of our approach when used with the DiffAug [2] technique on the StyleGAN2 architecture. We have added these results in Table 9 of the Appendix.\\n\\n| Method || CIFAR-10 ||| CIFAR100 ||\\n|-||-|||-||\\n| |100% | 20% | 10% | 100% | 20% | 10%|\\n|StyleGAN2 DiffAug | 9.89 | 12.15 |14.5 |15.22 | 16.65 | 20.75 |\\n|StyleGAN2 DiffAug+DIP | 9.50 | 10.92 | 12.03 | 14.45 | 15.52 | 17.33 |\\n\\nWe also run DiffAug on the unconditional BigGAN architecture for comparison with DIP. Below, we show the improvement from using DIP in conjunction with DiffAug [2] on CIFAR-100, while varying the amount of data used in training, on the BigGAN architecture with random horizontal flip augmentation and other hyperparameters the same as in Table 3 of our paper.\\n\\n| CIFAR-100 | 100% | 20% | 10% |\\n|-|-|-|-|\\n|BigGAN DiffAug | 13.33 | 19.78 | 23.80|\\n|DiffAug+DIP | 12.70 | 16.91 |20.47|\\n\\nWe have shown that our methodology DIP can be effectively combined with DiffAugment in limited/few-shot data settings for improved performance, as shown in Tables 1 and 2.\\n\\n**Q7: An ablation study is suggested to highlight the contribution of each part (e.g. 
knowledge distillation, covering real data modes) in the final performance of DIP.**\\n\\nDuring our DIP training methodology in the few-shot setting, knowledge transfer can be done as\\n\\n(a) Initialization of the GAN\\u2019s weights from a pre-trained GAN\\n\\n(b) Using conditional data priors for images extracted from a pre-trained network like Vgg-16.\\n\\nThe contribution of (a) is shown in Table 1, where we have added results when the network weights are initialized from scratch (randomly initialized) vs pre-trained GAN weights as initialization for DIP training.\\n\\nTo analyze the contribution of (b), we compare it to two versions of the baseline approach.\\nFirstly, Baseline-unconditional, which is an unconditional GAN (replacing conditional batch-norm with standard batch-norm in SN-GAN), and secondly a modification of baseline training named Baseline-Embedding, where we learn an embedding (initialized from scratch) for each image during training, similar to the DIP approach (but in Baseline-Embedding the priors are learned rather than derived/distilled from a pre-trained network as done in DIP). This shows the isolated advantage of knowledge distillation as priors in GAN training. Below we show the table of FID scores for comparison with DiffAugment as the baseline training strategy for 100-shot image generation on the Anime, Faces, and Flower datasets.\\n\\n| Method | Anime | Faces | Flower |\\n|-|-|-|-|\\n|DiffAug-unconditional |160.18 | 154.30 | 136.32 |\\n| DiffAug-Embedding | 85.16 | 109.25 | 83.45 |\\n| DiffAug + DIP-Vgg16 | 48.67 | 62.44 | 79.86 |\\n\\nWe note that all the baseline FID scores reported in Table 1 correspond to the Baseline-Embedding approach, as mentioned in the Appendix, since it performs better than the Baseline-unconditional approach in our experiments in terms of the FID score.\"}",
"{\"title\": \"Response to Reviewer 1 (Question 1-4)\", \"comment\": \"We would like to thank the reviewer for providing insightful ideas and constructive comments on the paper.\\nAt the outset, we wish to point out that we have comprehensively addressed (rewritten) all the concerns regarding clarity in our methodology (which seemed to be the major concern across the reviews). We have also introduced an algorithm to show the implementation details of our method clearly. We now address each of the concerns pointed out by the reviewer.\\n\\n**Q1: The sampling process should be placed under \\u201cExpectation\\u201d rather than \\u201cminimize\\u201d in Eq3. What parameters are getting optimized in the loss objective in Eq 3? What is the relationship between $x$ and $\\\\tilde{x}$ ?**\\n\\nYes, we thank the reviewer for pointing out the correct position of the sampling process. We have updated Eq. 3 in our paper to reflect the objective being optimized for both the $G$ and $D$ parameters. The final discriminator output, which is optimized adversarially for $G$ and $D$, is the sum of the real/fake loss and the projection loss between the discriminator feature and the conditional prior embedding. We have also added a pseudo-code in our Methodology section to further clarify the training process of DIP.\\nGenerally, each training step of a GAN comprises $d_{steps}$ discriminator updates and a single generator update. We denote $x$ and $\\\\tilde{x}$ as images sampled from the real distribution for the $G$ and $D$ updates respectively. In our implementation, where $d_{steps} = 4$ and $G_{step} = 1$, we take $x$ as the last batch (i.e. the 4th minibatch) of sampled real images for the $D$ update, thus making $x$ a subset of $\\\\tilde{x}$. We have now removed this notation from our paper since the updated Eq. 3 mentions the losses for $G$ and $D$ separately.\\n\\n\\n**Q2: Figure 1 is confusing, it shows that the adversarial loss is dependent on the projection loss. 
What is the equation that shows the relationship between these two losses?**\\n\\nYes, the adversarial loss depends on the projection loss. The final loss for our DIP approach is the sum of the real/fake loss and the projection loss, similar to the cGAN objective [4], and is adversarially optimized. We have modified Figure 1 in our paper to remove this confusion and have updated Eq. 3 to reflect this.\\n\\n**Q3: What is the final equation for the adversarial training for DIP? Also, a detailed training process is necessary, e.g. how to sample, when to optimize the generator, discriminator, and the embedding networks?**\\n\\nApologies for this confusion. We have updated Eq. 3 to contain the full objective loss with respect to each component of the GAN, and have now included the pseudo-code (algorithm) of DIP in our Methodology section 4 as Algorithm 1, which describes our training process in detail.\\n\\n**Q4: How to make sure that \\u201cenforcing feature Df(G(z|C(x))) to be similar to Demb(C(x)) ensures that for each real sample, there exists\\u201d as the constraint is made on latent space and not in the image space?**\\n\\nDuring training using DIP, images generated with condition $C(x)$ are enforced to have their discriminator features close to $D_{emb}(C(x))$. Fig 9 in the Appendix qualitatively shows that for a model trained using DIP, the image generated using $C(x)$, i.e. $G(z|C(x))$, is semantically close to the image $x$. We have also quantitatively measured this as performance on the Recall and IvoM metrics (Table 4), and observe that it outperforms the baseline methods.\\n\\nTo validate the concern regarding the equivalence of closeness in latent and image space, we measure the correlation between cosine similarity in the Discriminator feature ( $D_f(.)$ ) and Vgg-16 feature (perceptual similarity) space. 
Vgg-perceptual similarity [3] is an accepted measure of image similarity and has been used in generative models like IMLE, GLANN, BSA as a proxy for constraints in image space. Additionally, we also report the correlation between cosine similarity in Discriminator feature space and $L_2$ closeness measure in the image space. The table below reports our findings where we observe a high positive correlation between cosine similarity in $D_f$ and VGG perceptual similarity; and a moderate negative correlation between cosine similarity in $D_f$ and $L_2$ distance in Image space.\\n\\n| Pearson Correlation | Anime | FFHQ | CIFAR-10 |\\n|-|-|-|-|\\n| $D_f$ cosine vs VGG Perceptual | 0.65 | 0.81 | 0.80 |\\n| $D_f$ cosine vs Image $L_2$ | -0.46 |-0.61 | -0.54 |\\n\\nTo quantitatively verify that G(z|C(x)) is close to x in the trained model, we also show below the perceptual similarity between the two as compared to a random pair of images.\\n\\n| Cosine Similarity | Image $x$ and its conditional generated image $G(z\\\\|C(x))$ | Random Pair |\\n|-|-|-|\\n| VGG perceptual space| 0.512 $\\\\pm$ 0.067 | 0.382 $\\\\pm$ 0.050 |\\n|Discriminator\\u2019s feature space| 0.59 $\\\\pm$ 0.096 | 0.50 $\\\\pm$ 0.070 |\"}",
"{\"title\": \"A transfer learning method for GANs in the limited data domain\", \"review\": \"This work proposed a transfer learning method for GANs in the limited data domain. It borrows ideas from IMLE (to overcome mode-collapse) and conditional GAN (to improve training stability and generation quality); by introducing a data instance prior (which plays a similar role to that of label information in conditional GAN) and knowledge distillation techniques, the model is claimed to be effective in preventing mode collapse and discriminator overfitting.\\n\\nThough the main idea makes sense to some extent, the writing is a little weak, especially the equations and some statements that are not correctly verified in the experiments, making it difficult to go through the paper. \\n\\nDetailed concerns are listed below.\\n\\n1. Eq 3 is not correct; the sampling process should be placed under \\u201cExpectation\\u201d rather than \\u201cminimize\\u201d; also, there is no information about which parameter is going to be optimized here. What\\u2019s the relationship between x and x~? Are they independently sampled from the target dataset? \\n\\n2. Figure 1 is confusing. It shows that the adversarial loss depends on the projection loss, but there is no equation to show the relationship between these two losses.\\n\\n3. There are many network components in the proposed method; it is not easy to guess the objective related to the real/fake score in an adversarial manner, so a clear equation for the adversarial training is necessary. Also, a detailed training process is necessary, e.g. how to sample, when to optimize the generator, discriminator, and the embedding networks?\\n\\n4. How to make sure that \\u201cenforcing feature Df(G(z|C(x))) to be similar to Demb(C(x)) ensures that for each real sample, there exists\\u201d? Apparently, the constraint is made on some latent space and there is no statement to show that closeness in latent space is equivalent to closeness in image space.\\n\\n5. 
When the data is large scale, doing clustering is prohibitive.\\n\\n6. On Table 3, the results are not state of the art on CIFAR10 and fall behind recent works on generation with limited data, e.g. discriminator augmentations (DA) [1], DiffAugment [2]; then what\\u2019s the advantage of the proposed method when compared to these methods?\\n\\n7. An ablation study is suggested. It\\u2019s not easy to see the contribution of each part (e.g. knowledge distillation, covering real data modes) to the final performance.\\n\\n8. It claims that the proposed method can prevent discriminator overfitting, but in the experiments, it is not shown. \\n\\n9. What\\u2019s the motivation to do interpolation on the data instance prior? How to make sure that the interpolation on the prior (Eq.4) is smooth? \\n\\n10. On semantic diffusion for image manipulation. It is not clear how to obtain the results shown in Figure 4/9 on custom editing, e.g. there is only one input image, so how to compute and exchange the C(x)? Also, the effects of manipulating high-level semantics and fine-grained details are not observed/discussed in the experiments.\\n\\n\\n[1] Karras T, Aittala M, Hellsten J, et al. Training generative adversarial networks with limited data[J]. arXiv preprint arXiv:2006.06676, 2020.\\n[2] Zhao S, Liu Z, Lin J, et al. Differentiable augmentation for data-efficient gan training[J]. arXiv preprint arXiv:2006.10738, 2020.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"\", \"review\": \"This submission deals with transfer learning for training GANs with limited label data. The challenge is that training with limited data can result in mode collapse. This submission proposes to use data priors for each instance of the target distribution, transformed through knowledge from a source domain, as conditional information in GAN to ensure mode coverage of the target data distribution. A pre-trained feature extractor is used to provide the information to condition the GAN. A range of experiments is performed with the features extracted from VGG16, SIMCLR. They show consistent improvements in the image quality and diversity, measured via FID and precision-recall, for few-shot, limited data, and even large scale data settings.\\n\\nStrong points\\n- The idea of creating useful guides based on pre-trained features for conditional GANs makes sense.\\n- The experiments are extensive and they show improved quality and diversity for a wide range of settings.\\n\\nCorrectness\\n- The idea and the reported experiments make sense. \\n\\nReproducibility\\n- The code has been provided with the code for pre-training and other details included. \\n\\nMore suggestions and comments\\n- IMLE description in (2) is vague. It seems that a Gaussian sample z is taken, but on the other hand, for a given network G, z is optimized to match x. The restrictions on z are unclear.\\n- It seems that hinge-loss is used for GAN based on (1)? It would be important if the authors could comment on the choice of the divergence/loss in this setting. One may wonder whether the limited data and mode-collapse could be better handled with Wasserstein distance. An ablation study would be very useful to clarify the role of the distance.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"GAN with Transfer Learning\", \"review\": \"This paper illustrates how they train GANs with small sample sizes with the help of Transfer Learning. The paper tackled a very specific problem: what should we do with a small training sample size if we want to train a GAN. The authors have supported their arguments by a proof in Data Instance Prior and experimental results. They illustrated both aspects well.\", \"here_are_my_point_of_views\": \"Transfer learning is a good way to help GANs when sample size is limited, but I have two concerns over this paper:\\n1. The datasets are very popular in the field of GANs; however, for the Anime one, I am just curious how the VGG pretrained network can also help. \\n\\n2. As for the experiments, it lacks a comparison with results where transfer learning is not applied. \\n\\nGenerally, the paper is good and it can help data augmentation in other applications.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Data Instance Prior for Transfer Learning in GANs\", \"review\": \"##########################################################################\", \"summary\": \"The paper focuses on improving the performance of training generative adversarial networks (GANs) with limited target data. Given the low diversity and quality when training GANs with little data, the paper proposes to use a data instance prior to reduce overfitting. Specifically, taking the target sample as input, the data prior is extracted by a pre-trained network / self-supervised model, and then mapped into the embedding by both G_emb and D_emb. The former acts as the class embedding, and the latter is the image embedding combined with the discriminator. Authors also extend the proposed method to large datasets, and provide a clustering method or a Gaussian Mixture Model. The quantitative and qualitative results support the proposed method.\\n\\n##########################################################################\", \"pros\": [\"The idea of utilizing the informative data prior to help train GANs is interesting. Authors leverage the pre-trained model to extract the data prior, and combine it with conditional GANs. Basically, SNGAN is considered in this paper, which uses the projection loss instead of cross-entropy loss to perform conditional image generation. Based on the form of the projection loss, authors are able to avoid the problem of requiring labels for the target data, and directly use the extracted embedding to conduct image generation.\", \"For large datasets, authors also provide a simple and effective method to get the semantic embedding.\", \"For experiments, the paper uses current SOTA methods (BigGAN, SNGAN and StyleGAN2) and a series of datasets to evaluate the proposed method. 
Besides, all the latest methods (to the best of my knowledge) are compared to the proposed method, even the unpublished papers (DiffAugment), which indicates that the proposed method is effective and convincing.\", \"##########################################################################\"], \"cons\": [\"For me, the paper is not so clear to understand, and misses some information.\", \"(1) Does the final objective contain E.q 2? From the description above E.q 3 and the architecture, it seems to contain two losses.\", \"(2) Is E.q 2 used at inference time, which relates to the Inference section? I fail to connect the inference section to the whole paper.\", \"(3) If E.q 2 is utilized in this paper, what is different to BSA? From my point of view, BSA pairs the noise and the real sample, and optimizes the input noise as well as the parameters of the batchnorm, but fails to consider the adversarial loss, which results in generating blurry images. In this paper, authors additionally consider the adversarial loss, and improve the reality of the synthesized image.\", \"With conditional GANs selected in this paper, I am wondering how to combine it with StyleGAN2, although the description is provided in the Appendix. To be honest, I am not sure it works well when combining StyleGAN2 with the projection loss.\", \"What is the goal of both G_emb and D_emb? Do they just map the extracted embedding to the same dimension required by conditional GANs? Authors mention that it is a non-linearity or linear transformation matrices for different GAN frameworks, varying across GAN architectures. What are linear transformation matrices? What is the role of designing both G_emb and D_emb?\", \"In Table 1, the result when combining FreezeD on Flower is low. Could authors explain it?\"], \"minor_comments\": \"(1) Is TransferGAN cited correctly in the section of Baselines and Datasets on page 6? 
I think it is the one [1], which is the first paper to perform transfer learning for GANs with limited data.\\n\\n(2) What are x and hat of x in E.q 3? \\n\\n(3) In the abstract, authors mention 'Previous works have addressed training in low data setting by leveraging transfer learning and data augmentation techniques with limited success'. Is it 'with limited success' or 'with limited data'? \\n\\n[1] Transferring gans: generating images from limited data.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
e6hMkY6MFcU | WordsWorth Scores for Attacking CNNs and LSTMs for Text Classification | [
"Nimrah Shakeel"
] | Black box attacks on traditional deep learning models trained for text classification target important words in a piece of text, in order to change model prediction. Current approaches towards highlighting important features are time consuming and require large number of model queries. We present a simple yet novel method to calculate word importance scores, based on model predictions on single words. These scores, which we call WordsWorth scores, need to be calculated only once for the training vocabulary. They can be used to speed up any attack method that requires word importance, with negligible loss of attack performance. We run experiments on a number of datasets trained on word-level CNNs and LSTMs, for sentiment analysis and topic classification and compare to state-of-the-art baselines. Our results show the effectiveness of our method in attacking these models with success rates that are close to the original baselines. We argue that global importance scores act as a very good proxy for word importance in a local context because words are a highly informative form of data. This aligns with the manner in which humans interpret language, with individual words having well-defined meaning and powerful connotations. We further show that these scores can be used as a debugging tool to interpret a trained model by highlighting relevant words for each class. Additionally, we demonstrate the effect of overtraining on word importance, compare the robustness of CNNs and LSTMs, and explain the transferability of adversarial examples across a CNN and an LSTM using these scores. We highlight the fact that neural networks make highly informative predictions on single words. | [
"cnns",
"lstms",
"scores",
"word importance",
"wordsworth scores",
"text",
"single words"
] | Reject | https://openreview.net/pdf?id=e6hMkY6MFcU | https://openreview.net/forum?id=e6hMkY6MFcU | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"_GqW6y8KdnP",
"hh2h8N6Ao7p",
"5BoUXutrHcv",
"uKCxHjNEfr",
"mg4XZRnLvW",
"AzecD1-FePN",
"TSguygLK89s"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040350508,
1606251404393,
1606250326278,
1606247085743,
1604235321862,
1603894275824,
1603664559894
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3461/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3461/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3461/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3461/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3461/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3461/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The authors propose a method for attacking neural NLP models based on individual word importance (\\\"WordsWorth\\\" scores).\\u00a0 This is an interesting, timely topic and there may be some interesting ideas here, but at present the paper suffers from poor presentation which makes it difficult to discern the contribution. Presentation issues aside, it seems that the experimental setup is missing key baselines (an issue not sufficiently addressed by the author response).\"}",
"{\"title\": \"Response to the points raised in the review\", \"comment\": \"Thank you for your review. Here is our response to all the points raised in the review:\\n\\n1. We have updated the paper for improved presentation.\\n\\n2. When traditional deep learning models (CNNs and LSTMs) are used, leave-one-out or some variant of it is the only technique that is used for feature importance, as outlined in the literature for attacks/interpretation.\\n\\n3. We have added the learning rate, learning algorithm and embedding initialization method to the current revision. The architecture for all the experiments has been mentioned in the relevant sections.\\n\\n4. Training vocabulary size of 5000 was a randomly chosen parameter, as was the test subset to be attacked.\\n\\n5. In all experiments, the attack success rate for the original algorithms (greedy and delete one) is always better. However our technique is, on the surface, a very rough measure of actual word importance. We are claiming that in a paragraph of 200 words, the importance of any word can be determined approximately by removing 199 words and evaluating a classifier on just this single word. The decrease in success varies across architectures too (CNNs perform worse than LSTMs). Also notice that most of the time, the difference in performance is less than what would have been achieved with the original technique by perturbing one additional feature/word. However, we have modified the wording in the abstract to highlight the fact that there is some decrease in attack success.\\n\\n6. This is a valid point, and it would be interesting to see how this kind of approach works on other NLP tasks. However, we have limited our analysis to text classification (sentiment analysis and topic classification). The technique is most suitable for scenarios where a classifier learns to associate words with one label from a subset of labels. 
To determine grammaticality, context is extremely important, and our technique completely ignores context, sacrificing accuracy for efficiency.\\n\\n7. For interpreting a trained model, we show how to find important words for each class, provide lists in the appendix, and highlight that our model has learned that 'martha' is the 9th most important word for category 'Business' in the AG News dataset, which could potentially be a mistake we do not want our model to make. \\nWe also show that these scores explain transferability between a CNN and an LSTM. Our argument here has two parts: that scores from the CNN and scores from the LSTM have very high correlation (~0.88), and scores from the CNN can be used to attack the LSTM (which is the transferability phenomenon), and that the former explains the success of the latter.\"}",
"{\"title\": \"Some clarifications\", \"comment\": \"Thank you for your review. Here is our response to the questions raised in the review:\\n\\n1. Our contribution lies in highlighting that the predictions on a single word, while deleting the rest of the input, act as a surprisingly good proxy of feature importance. This is directly opposite to all the current approaches to measuring word importance, where a particular word is deleted from an input sample and the change in prediction is used as the importance score. This leave-one-out (LOO) method is the dominant technique in black box attacks as well as interpretation techniques (detailed literature review mentioned in the paper).\\n\\n2. For feature importance, we agree that we have only compared with the leave-one-out method, and there are other interesting approaches. However, we limit our analysis to traditional deep learning models (CNNs and LSTMs) and for these models, LOO (or some variant of it) is the only black box technique that is used for calculating feature importance. It would indeed be interesting to compare to frequency based measures, but we think the benchmark then would not remain fair, as our technique uses model predictions which are more informative than a simple frequency-based measure. Another point regarding experimentation is that we conduct experiments on three datasets (two binary, one multi-class) and different architectures.\\n\\n3. Indeed the writing for this section is poor and we apologize for this; it has been improved in the updated version. Zeros are appended to a word to calculate its score because our models (CNN/LSTM) operate on fixed-width input, 200 words in our case. So to get inference on one word we construct an artificial input which consists of a single word.\\n\\n4. Particular comments about writing, provided in the reviews, have been quite helpful, thank you for the review!\\n\\n5. We conduct all experiments with CNNs and LSTMs. 
The architecture for every experiment has been mentioned. We have added learning rate and optimizer details in the updated version. Early stopping was used to recognize a good stopping point.\\n\\n6. Indeed 5000 words (the training vocabulary size in our experiments) would not be covered by just 25 reviews. We are claiming that the LOO approach would require 5000 model evaluations for attacking 25 reviews having 200 words each, because this technique measures importance by deleting individual words and thus the analysis has to be repeated for each word when it appears in a new input sample (a different movie review, for example). On the other hand, our technique requires 5000 model queries initially, but nothing further.\\n\\n7. These parameters were chosen at random. The test examples we use throughout our experiments are a random subset of test data. The 10 neighbour choice was borrowed from (Yang et al., 2018).\\n\\n8. Our main result is the close alignment between success rates of our techniques and traditional approaches. However, some examples would have been interesting to add, which we have missed. \\n\\n9. AG news is a multiclass dataset (4 classes in all) so we used accuracy for reporting its result. For the other two binary datasets, we show accuracy for Yelp and AUC scores for IMDB because we think that AUC is an interesting metric for attacks. It incorporates the changes in prediction even when an attack isn't successful but the confidence score of the classifier decreases nonetheless.\\n\\nOverall, we have described all the components of our experiments, and verified our results on multiple datasets. We have compared to two baseline attack algorithms, greedy and delete one.\\nAdditionally, we show the use of these scores to gain some insight into the working of a trained model, such as finding important words for a class and the word importance distribution difference between different classes. 
We also show that the high correlation of these global scores from a CNN and an LSTM explains the transferability of adversarial attacks.\"}",
"{\"title\": \"Improved version uploaded\", \"comment\": \"Thank you for your review. An updated version has been uploaded, with particular focus on the algorithm.\"}",
"{\"title\": \"The paper should be improved before submission\", \"review\": \"The paper is poorly written. This is especially the case for the\\nproposed algorithm (the core contribution). This section is very\\ndifficult to understand, and notations are awkward. Everything is a\\nbit messy. However, the experimental results are quite well presented\\n(to be compared with the beginning of the paper).\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper provides a simple approach, but this paper lacks completeness.\", \"review\": \"Summary:\\nThis paper proposes the WordsWorth score (WW score), a score to represent the importance of a word obtained from the trained model. Then, the score is applied to the greedy attack proposed by (Yang et al., 2018). In detail, the greedy attack first tries to search for the most important $k$ words in a text, and then it searches for values to replace the selected $k$ words. This paper uses the WW score to select the $k$ words in the first step.\\n\\nStrong points\\n+ A simple but effective approach to utilize for the greedy attack\", \"concerns\": [\"The main concern with this paper is its minor contribution to current knowledge. Despite the paper stating that this paper is based on the greedy attack (Yang et al., 2018), the contribution of this paper is limited to calculating the word score from the trained classifier and applying it to the greedy attack.\", \"Another concern about the paper is the lack of rigorous experimentation to study the usefulness of the proposed method. This paper does not compare with other score-based approaches. That is, it was not even compared to the tf-idf based score approach.\", \"The writing should be largely improved. Section 4 is the main part of this paper. In Step 1, this paper represents a word as a $d$-dimensional vector. Why does this paper append the zeros in front of the word representation? Does it mean the one-hot vector? If not, some studies or discussions about this representation should be included. In Step 2, equations are hard to follow, and some are incorrectly written (e.g. case equation and definition of the D\\u2019).\", \"On the same note, the readability and completeness of this paper do not meet the standard of the conference. The reviewer suggests that the authors review the paper several times before submission.\", \"This paper states that $F$ is the trained classifier. 
However, there are no explanations on how to train or what kinds of classifiers were used.\", \"In Section 5.1.2, this paper states that it covers 5000 vocabularies freely after 25 reviews are processed because reviews are written in 200 words on average. It is definitely incorrect. Because all words in the review are not unique, it takes a longer time to cache all words in the 5000 vocabularies. Although the proposed method can speed up by looking up the cached score, the performance of the proposed method is lower than that of the original greedy approach. Some ablation studies or discussions about the relationship between speed and performance would have been useful to understand this.\", \"Some parameters or data are heuristically selected, such as selecting 10 nearest neighbors in step 2, picking 300 examples from test data in IMDB review experiments, and so on. Some form of ablation study about the parameters would help readers assess their appropriateness.\", \"Furthermore, it would be better to show examples of successful attacks with WW scores.\", \"In experiments, the AUC score is used for IMDB evaluation and the accuracy is used for Yelp and AG news. Are there any reasons to use different evaluation measures?\"], \"minor_comments\": \"1. It would be better to write some constants such as $k$ in Section 3 as an italic character.\\n2. It will help readers if the authors can explicitly specify which side of the figure, left or right, is explained when the authors are referring to the figure in text.\", \"some_typos\": \"1. In Section 5.1.1, greedyww -> greedy\\\\_ww\\n2. In Section 5.4.2, figure ?? -> Figure 1\\n3. In Section 5.4.1, $greedy_ww$ -> greedy\\\\_ww\\n4. In Section 5.4.3, figure ??\\n5. In Section 6.1, CNN 4 -> CNN at Figure 4\\n...\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting idea but a paper clearly not ready for publication\", \"review\": [\"This paper proposes a new and simple way to determine word importance for black box adversarial attacks on text classification models. Instead of using example-specific measures of importance like recent work (typically expensive to compute), the authors propose to feed individual words from the vocabulary to a trained model and use the model confidences to get global, class-specific importance scores.\", \"While being an interesting paper, at a high level I am concerned about several points:\", \"The paper is unpolished and at times hard to follow. I do not consider it ready for publication at this stage.\", \"There are many easy-to-implement baselines (feature selection has a rich history) that would have been very interesting to study. Are the WW scores capturing anything that simpler statistical methods do not?\", \"Many implementation details are lacking, which could be an issue for reproducibility. For example, the CNN model details are unclear (number of layers, filter sizes, embedding initialization, etc.). Learning rates and other hyperparameters are not mentioned.\", \"Why is the vocabulary size only 5000? Why are experiments run on only 500 examples, and how are these examples selected?\", \"The claim of comparable performance seems slightly exaggerated, as the WW scores perform consistently worse than the baselines. In some cases, the difference seems around 0.1 / 0.15 absolute AUC. Also, providing raw scores (either all scores in the Appendix or a subset of the most interesting cases in the text) would help readers to quantify these differences.\", \"Some of the claims seem overly broad. For determining, say, grammaticality, I would expect greedy to vastly outperform greedy_ww.\", \"It is unclear to me how WW scores help with network interpretation. 
Again I would expect WW scores to correlate with a number of statistical correlation measures.\", \"There are many formatting issues, with figure numbers, references, equations, etc.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
xfNotLXwtQb | Inductive Collaborative Filtering via Relation Graph Learning | [
"Qitian Wu",
"Hengrui Zhang",
"Xiaofeng Gao",
"Hongyuan Zha"
] | Collaborative filtering has shown great power in predicting potential user-item ratings by factorizing an observed user-item rating matrix into products of two sets of latent factors. However, the user-specific latent factors can only be learned in transductive setting and a model trained on existing users cannot adapt to new users without retraining the model. In this paper, we propose an inductive collaborative filtering framework that learns a hidden relational graph among users from the rating matrix. We first consider a base matrix factorization model trained on one group of users' ratings and devise a relation inference model that estimates their underlying relations (as dense weighted graphs) to other users with respect to historical rating patterns. The relational graphs enable attentive message passing from users to users in the latent space and are updated in end-to-end manner. The key advantage of our model is the capability for inductively computing user-specific representations using no feature, with good scalability and superior expressiveness compared to other feature-driven inductive models. Extensive experiments demonstrate that our model achieves state-of-the-art performance for inductive learning on several matrix completion benchmarks, provides very close performance to transductive models when given many training ratings and exceeds them significantly on cold-start users. | [
"collaborative filtering",
"matrix completion",
"inductive learning",
"relation learning",
"recommender systems"
] | Reject | https://openreview.net/pdf?id=xfNotLXwtQb | https://openreview.net/forum?id=xfNotLXwtQb | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"jvYxzq3Socw",
"2eErMbJhJY_",
"aKVj7kTWYOI",
"EmOzi2EwS8",
"Wf1383bWMbo",
"PH6GdhXhJ0P",
"i3MMD7Jk74X",
"IJedtHABwCD",
"q_3yh7OE3zs"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040501700,
1605536190799,
1605535879552,
1605535661726,
1605535010391,
1604641130350,
1604415867523,
1603870234269,
1603631822523
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3459/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3459/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3459/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3459/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3459/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3459/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3459/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3459/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper is somewhat borderline, though reviews mostly lean positive. Unfortunately after calibrating compared to other submissions, the work remains somewhat below the bar compared to higher-scoring papers.\\n\\nThe reviewers praise the topic, the method, and the experiments (although some of this praise is a little mixed or lukewarm). The most negative review raises several specific concerns about the evaluation methodology, as well as some concerns about data leaks etc. While serious, the authors rebuttal to these claims seems reasonably convincing. While the remaining issues appear not to be dealbreakers, there are nevertheless some lingering concerns which ultimately put the paper slightly below the bar.\\n\\nThe AC notes that their initial inclination was to accept this paper, though it was suggested that the score be lowered after calibration compared to other submissions, mainly due to doubt regarding these lingering issues.\"}",
"{\"title\": \"We add a thorough discussion on differences to related works in Section 2\", \"comment\": \"Thanks for your comments and review. On the methodological side, the novelty of our model lies in jointly learning a global graph over users and user representations based on attention weights in the graph. Such an idea brings up several advantages over previous works. 1) Compared with inductive matrix completion with features or with item-based embedding (like FISM [1]), our model maintains superior capacity using both learnable parameters in user and item space. 2) Compared with inductive matrix completion with local-graph structures [2], our model possesses better expressiveness and scalability (justified in Section 3.2 and our experiments). 3) Compared with transductive one-hot embedding-based CF models, our model achieves inductive learning and does not sacrifice any capacity (justified by Thm 1 and our experiments). In short, IRCF, as a new inductive matrix completion model, unifies the advantages of other methods and overcomes their limitations in one general CF framework.\", \"q1\": \"Related works section\\n\\nWe add several paragraphs in Section 2 to discuss relationships and differences to other papers (including general CF, feature-driven models, inductive matrix completion, item-based models). Also, we add a figure in Appendix A as a comparison with other works.\", \"q2\": \"How to handle user bias term\\n\\nFor query users (whose training ratings are used for learning inductive models), we learn the b_u together with model optimization. For new users (unseen by our model during training), we use averaged values of other users\\u2019 biases as an estimation.\", \"q3\": \"Unnormalized attention scores\\n\\nWe tried using an unnormalized version in our experiments before, and the performance was not good compared to the normalized scores. 
Also, we empirically found that normalized scores can help to stabilize the training and avoid mode collapse on a small group of support users\", \"references\": \"[1] Kabbur et al., FISM: factored item similarity models for top-n recommender systems. In KDD 2013.\\n\\n[2] Zhang et al., Inductive Matrix Completion Based on Graph Neural Networks. In ICLR. 2020.\"}",
"{\"title\": \"The novelty of our model lies in jointly learning graph structures and node representations to enable inductively computing user embeddings\", \"comment\": \"We thank for your review and comments. Here is a response to the questions.\", \"q1\": \"Comparison with IGMC\\n\\nWe need to point out that the rationale of IGMC is totally different from our model, though both consider inductive matrix completion without features and use GNNs as tools. IGMC extracts a local subgraph of 1-hop neighbors for a user-item pair and uses GNNs to encode such subgraph to get predicted rating value. Note that such subgraphs only contain structure information without user and item indices, and this is why IGMC can achieve inductive learning (if incorporating user index, the model has to learn one-hot user embedding/representation and become limited in transductive learning). By contrast, our IRCF first learns matrix factorization for users and extends user embeddings for new users via message passing over estimated hidden relational graphs. In short, there are two main differences between two models. 1) IGMC cannot produce user embeddings/representations while IRCF maintains such ability in inductive learning. The user embeddings are important for user profile representation and many downstream tasks, like target advertisement, user-controllable recommendation, influence maximization, etc. Also, IRCF as an embedding-based model possesses superior expressiveness than local-graph-based model (IGMC), as shown in Section 3.2 and our experiments. 2) IGMC focuses on subgraph structures in an observed bipartite graph, while IRCF considers learning a hidden global graph over users. 
The joint learning of graph learning and graph representation can address noisy and incomplete information (resulted from user exposure bias) in observed user-item ratings.\", \"q2\": \"more discussions for Thm 1\\n\\nSince matrix factorization assumes a learnable embedding vector for each user, such one-hot embeddings provide maximized capacity for learning user\\u2019s preferences compared to shared embeddings in other models (e.g., feature-driven model with common feature space or item-based models with common item space). Thm 1 indicates that IRCF can minimize the loss to the same level as standard matrix factorization under one mild condition, which shows that our proposed inductive model does not sacrifice any model capacity.\", \"q3\": \"more discussions for Thm 2\\n\\nB and H can be treated as constants and have small effects. B denotes the bound for user\\u2019s rating on items, determined by specific datasets, e.g. B=5 in movielens. H is determined by sparsity of the hidden relational graph. In most cases, we found the hidden graph is very sparse (as shown in Fig. 4(c)) so H is not a large value in practice. M2 is the number of query users. The generalization error bound becomes tighter with fewer query users, in which case the model tends to focus more on specific users in the context of collaborative learning among query users.\", \"q4\": \"different metrics\\n\\nWe agree that using the same metric would be more convincing. In Table 1 and 2, we follow the common benchmarks used in NNMF, GCMC, IGMC, F-EAE, using RMSE as metrics in Movielens and Douban datasets. In Table 3 for experiments on new users, we follow the strong competitor MeLU in cold-start recommendation, directly use its provided datasets (a feature-augmented version for ML-1M) and follow the evaluation protocol in its paper using MAE as metric. We update the results of RMSE for Table 3 experiment in the paper. The results are consistent with MAE. 
IRCF achieves 0.9367 and exceeds MeLU with 0.9625 over a large margin.\"}",
"{\"title\": \"Clarifications for misunderstandings and more discussions\", \"comment\": \"Thanks for your comments. We need to point out that there are two types of situations widely considered in recommender system area: 1) explicit feedback dataset with user-item rating information (e.g. 1,2,3,4,5); 2) implicit feedback dataset with user-item click (or view/like) information (e.g. 0/1). In the first case the recommendation task is usually formalized as a matrix completion problem [4-7], and RMSE as performance metric is a common practice in this setting. In the second case the top-N recommendation setting is widely used for evaluation. Our paper mainly focus on the first case, so we use RMSE as evaluation metric in Douban and Movielens datasets (with explicit feedbacks), following our competitors [4-7]. Also, we agree that the top-N recommendation is a very important and realistic problem, and we leave it as a focus of future works. In our experiments in Amazon dataset (with implicit feedbacks), we can also use top-N metrics (see Q4 in the following).\", \"q1\": \"Incremental learning for CF models\\n\\nWe change the statement \\u201chas to retrain the whole/entire model\\u201d to \\u201chas to retrain the model\\u201d which can be more precise. In fact, the incremental learning is a decent choice for new users in online systems. However, it also requires model learning for each new user (gradient descent for neural models or computing matrix inversion for linear systems with ALS algorithm), limiting the efficiency in inference. Also, such incremental learning is designed independently for each user, which would be more prone for over-fitting than collaborative learning among users in CF models. We experiment incremental learning on ML-100K/ML-1M, and get RMSE 1.014/0.9608 on query users. 
Our IRCF gives RMSE 0.981/0.944 (in Table 1), which are significantly better.\", \"q2\": \"Comparison with item-based models\\n\\nWe add discussions of relationships and differences to related works (including item-based models [1][2], VAE models [3]) in Section 2. Admittedly, such item-based and VAE models can inductively compute user embeddings without the need to retrain a model for new users. However, they have very limited capacity for learning user preferences, since they only consider learnable parameters in item space. By contrast, IRCF considers both learnable parameters in user and item space, with enough capacity as equivalent as general matrix factorization (which gives state-of-the-art performance on matrix completion). In fact, these methods are not used as baselines in the experiments of our main competitors NNMF, GCMC, IGMC, F-EAE. As further demonstration, we implemented [1], [2], [3] and get test RMSEs 2.920/2.090 for [1], 2.276/1.911 for [2], 2.981/2.861 for [3] on new users in ML-100K/ML-1M. Our IRCF gives test RMSEs 0.999/0.956, which exceed them by a large margin.\", \"q3\": \"Test data leak issue\\n\\nThis is definitely a misunderstanding for our evaluation. We strictly follow training/validation/test split for evaluation. In fact, as indicated in our experiment setup (1st paragraph in Section 4 and 1st paragraph in 4.1), we split training/test ratings for all the users in one dataset. Then we use the number of training ratings for each user to split users into two sets $ \\\\overline {\\\\mathcal{U}}_1$ and $ \\\\overline {\\\\mathcal{U}}_2$. We consider two situations: 1) use $ \\\\overline {\\\\mathcal{U}}_1$ as support users and $ \\\\overline {\\\\mathcal{U}}_2$ as query users; 2) use $ \\\\overline {\\\\mathcal{U}}_1$ as both support and query users. In both cases, we use training ratings of support (resp. query) users to train our transductive (resp. inductive) model. In the first case, we report RMSEs for test ratings for all (resp. 
query) users which corresponds to All (resp. Query) in Table 1. In the second case, we report RMSE for test ratings of users in $ \\\\overline {\\\\mathcal{U}}_2$ (new users unseen by the model) which corresponds to New in Table 1. Our experiments provide a fair comparison with other baselines in various situations.\", \"q4\": \"Evaluation metrics\\n\\nIn Amazon dataset, if we consider Recall@3/Recall@10 , we get 0.251/0.670 for IRCF-GC, 0.243/0.650 for IRCF-NN, 0.226/0.590 for NNMF, 0.223/0.588 for GCMC. The results are consistent with AUC results in Table 2 and our IRCF achieves the best performance.\", \"references\": \"[1] Cremonesi et al. \\\"Performance of recommender algorithms on top-n recommendation tasks.\\\" In Proceedings of the fourth ACM conference on Recommender systems, pp. 39-46. ACM, 2010\\n\\n[2] Kabbur et al. \\\"FISM: factored item similarity models for top-n\\nrecommender systems.\\\" In KDD, 2013.\\n\\n[3] Liang et al., \\\"Variational autoencoders for collaborative filtering.\\\" In WWW 2018.\\n\\n[4] Dziugaite et al., \\u201cNeural Network Matrix Factorization.\\u201d CoRR, abs/1511.06443. 2015.\\n\\n[5] Berg et al., \\u201cGraph Convolutional Matrix Completion.\\u201d CoRR, abs/1706.02263. 2017.\\n\\n[6] Zhang et al., \\u201cInductive Matrix Completion Based on Graph Neural Networks.\\u201d In ICLR 2020.\\n\\n[7] Hartford et al., \\u201cDeep Models of Interactions Across Sets.\\u201d In ICML 2018.\"}",
"{\"title\": \"We add discussion for motivations and differences to inductive graph representation learning\", \"comment\": \"Thanks for your review and comments. We address the proposed issues in order.\", \"q1\": \"Motivations of our model\\n\\nThe motivation lies in three-folds\\n1) Conceptually, in many recommender systems, user behaviors and preferences share a lot of proximity. For one user, his/her behaviors would be impacted by a group of other users. Based on these observations, we can model user\\u2019s behaviors/preferences via combination of other users\\u2019.\\n2) Mathematically, in matrix factorization framework, user\\u2019s embedding can be expressed as a weighted combination of base vectors which span the d-dimensional latent space. If support users\\u2019 embeddings are full-column-rank (as in thm 1), we can leverage them as base vectors to express arbitrary embeddings for query users.\\n3) From the perspective of graph representation learning, the observed graph often contains noisy links or missing important links. In our case, if we directly use the bipartite graph of user-item ratings to define a user-user graph, the graph would be pretty sparse especially for new users and miss potential links (ratings) due to user exposure bias. Hence, we turn to jointly learning graph structures (as attention scores) and node representations through graph attentive convolution, which brings up better accuracy and flexibility.\", \"q2\": \"Difference to inductive graph representation works\\n\\nIRCF jointly estimate neighbored nodes for a target node (given by attention scores) and learn node representations based on that, while inductive graph representation models often assume a given observed graph and directly learn node representations. Furthermore, IRCF can deal with nodes with new users with no historical edge, while the latter would fail for new nodes with no observed edge (if without node attribute features). 
We update this discussion after eqn 5/6 in our paper. Also, we summarize differences to other related works in Section 2 and add a figure in Appendix A for conceptual illustration.\", \"q3\": \"Experimental evaluation\\n\\nWe adopt RMSE as metrics in most of our experiments since most of baseline methods in our paper (NNMF, GCMC, IGMC, F-EAE, etc.) consider RMSE as the only metric in douban, ML-100k, ML_1M, ML-10M datasets. We use the same benchmarks to calibrate with them. Our paper focus on CF framework for general matrix completion task. It would be interesting to extend our method to ranking-based top-N recommendation as future work. Also, in Amazon dataset with implicit feedbacks, we can also consider top-N metrics for evaluation. For example, if we consider Recall@3/Recall@10 , we get 0.251/0.670 for IRCF-GC, 0.243/0.650 for IRCF-NN, 0.226/0.590 for NNMF, 0.223/0.588 for GCMC. The results are consistent with AUC results in Table 2 and our IRCF achieves the best performance.\", \"q4\": \"Temporal dynamics for users\\n\\nIt is a misunderstanding for our evaluation. We do not assume temporal dynamics on user side. For different rated items of a user, say $(u, i_1, r_1, t_1)$ and $(u, i_2, r_2, t_2)$ where $t_1$, $t_2$ denote different time when the rating behavior happens. We drop the timestamps and assume the same user historical set $\\\\mathcal I_u$ for both rating instance. In other words, we do not consider time information in our model and experiments, and assume all the ratings with no temporal order. Such settings are widely adopted in our comparative methods (e.g., GCMC, IGMC) and other papers for general recommendation.\"}",
"{\"title\": \"Interesting work with some caveats\", \"review\": \"##########################################################################\", \"summary\": \"This work proposed an inductive recommendation framework on user-item relation graphs. Such a framework relies on the user-item relations without the requirement of side-information and perceives certain flexibility in terms of the parametrization for user/item representations. The authors also provided theoretical analysis to highlight some mathematical insights out of this framework. The proposed method is evaluated on three real-world datasets and compared with several baselines.\\n\\nOverall I find the work was well-reasoned and executed in a relatively good shape, thus recommending acceptance.\\n\\n\\n##########################################################################\", \"strength\": [\"Relevant topic to the ICRL community and could have potential impact in real-world applications\", \"The proposed method is well reasoned and technically sound\", \"Experiments are executed in a decent shape\", \"##########################################################################\"], \"weakness\": [\"Motivations behind its technical contributions can be further sharpened; comparisons to previous related studies on the inductive graph learning domain can be further improved\", \"Some gaps between the current experiment setup and real-world recommendation senarios\", \"##########################################################################\"], \"detailed_comments\": \"I'll address the above potential weakness in details here.\\n\\nI personally find a bit difficult to digest the motivations of this work and how it differentiated from previous inductive graph learning work until diving into its detailed parametrizations. Fig. 
1 and its descriptions are helpful in terms of illustrating the inductive setting, but not quite informative in terms of concrete contributions of this work conceptually.\\nMy takeaway from the proposed framework is, the attentive pooling method falls into the aggregator family of inductive graph learning, despite that the aggregation and sampling scheme are performed on user side globally instead of on the user-item local neighborhoods. In this regard, it may also be helpful to highlight the (mathematical) difference between this work and existing inductive graph learning (e.g. pinSage) after eq.5/6.\\n\\nAlthough the experimentations are executed in a good shape, there are still some gaps between the current setup and real-world recommendation requirements. \\n- The proposed method is largely evaluated on the rating prediction setting, AUC is reported on the amazon dataset but no Top-K ranking metrics are performed during the experiments. It is acceptable given these metrics are consistent with the optimization objective, however, the notable gap between pointwise prediction setting and the real-world online top-K ranking setting needs to be called out.\\n- Another concern about the current evaluation protocol is, it enforces the temporal dynamics on the user side and assumes item representations remains the same - again it is consistent with the proposed method (i.e., Q remains the same) thus expected to favor it. The question is, whether these assumptions are consistent with real-world senarios. 
As far as I know, both movieLens and Amazon datasets have associated timestamps, what the real temporal dynamics here and what would be the warm/cold item/user distribution look like if splitting data chronologically?\", \"minor_concerns\": [\"Annotations in Figure 4 can be further enlarged for visibility\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"vote to reject\", \"review\": \"Summary:\\nThe work proposes a relational learning scheme that extends standard collaborative filtering approach and aims at improving recommendations quality for new users. The proposed method extracts relations from historical rating data within a preselected subset of support users and utilizes this knowledge to better represent newly introduced (query) users.\", \"reasons_for_score\": \"The paper misses comparison with a large set of competitive techniques based on incremental learning approach and provides no justification for this omission. Evaluation methodology seem to have test data leak, which may be the major source of performance gains instead of the model itself.\", \"pros\": \"The authors present an interesting view on relational learning problem within the collaborative filtering setting. Generating recommendations online in an instant fashion for both known and new users is indeed a very relevant problem of a great practical importance. The proposed relational learning-based modification of standard collaborative filtering schemes is described clearly and incorporates novel ideas. The authors also prove two theorems, which further support feasibility of their approach and provide some hints on the expected behavior.\", \"cons\": \"Both in the abstract and in the text, the authors state that standard CF techniques are unable to deal with new users without the need to retrain the entire model. This statement is wrong. For example, in the case of warm start scenario with matrix factorization techniques one can easily update the model specifically for a newly introduced user by performing a few update steps using only ratings of this user. In the case of SGD this would be just a few \\\"half-steps\\\" of gradient descent with fixed matrix of item embeddings. This can be generalized to neural-networks based approaches. 
Similarly, for the ALS algorithm (e.g., [1]) it would require \\\"half-step\\\" of solving a linear system w.r.t. new user embedding (see eq. 4 in [1]), which can be performed efficiently and in fact is one of the standard approaches in many production systems.\\nFurthermore, this can be even reduced to an analytical solution in the case of PureSVD approach [2]: it only requires learning item embeddings, and the user embeddings are simply represented as a weighted sum of the embeddings of items they interacted with (see eq. 6 in [2]). It's similar to the way the d_u variable is defined in this paper in eq. 4. Finally, a natural generalization of the latter representation would be an autoencoder, e.g. MultVAE [3] or RecVAE [4], which naturally resolves the warm-start scenario without the need for any modifications as there's no need for a separate user representation.\\nTherefore, in order to make comparison complete I would suggest to include some incremental learning techniques as well as autoencoder solutions and clearly demonstrate how the proposed method compares to them in terms of recommendation quality, flexibility, and computational efficiency.\\n\\nThe second major point here is the evaluation methodology. In eq. 4, matrices W_q, W_k represent trainable parameters. So how are they trained? Equation 7 explicitly states that training is done on historical data from query users (matrix R_2) and no other data splits are present. If that's the case, than there's a test data leak: historical rating data of query users is used to train parameters of the model and then the same users are used to evaluate the performance of the model. It doesn't correspond to the warm start scenario. The weights W_q, W_k must be fixed first (after they were trained) and then used to generate representations of users that were never shown to the model before with eq. 6. 
Hence, there should be at least two disjoint subsets of query users: one for validation and another one for actual testing of the model. Unfortunately, I couldn't find any hints on such a splitting neither in the main text, nor in the Appendix, which makes me believe there's no such splitting. Avoiding test data leaks is absolutely crucial for a fair comparison.\", \"minor_comments\": \"The problem of generating recommendations is not the same as the problem of rating prediction/matrix completion. It's important to keep this distinction in mind. The standard task for recommender systems is generating an ordered list of relevant items. The quality of this cannot be measured with RMSE. In fact, there's a strong evidence that models that perform well in terms of RMSE metric may not be good at all in terms of more appropriate metrics like precision, recall, nDCG, MAP, MRR, etc. Please, consider adding more appropriate metrics into the work.\\nAlso note that AUC is not the best choice for that matter as it makes no distinction between proper ranking of irrelevant items and proper ranking of relevant items. Considering that the majority of items in recommendations are typically irrelevant, high AUC scores may not reliably represent actual performance of algorithms in their ability to generate lists of relevant items.\", \"references\": \"[1] Hu, Yifan, Yehuda Koren, and Chris Volinsky. \\\"Collaborative filtering for implicit feedback datasets.\\\" In 2008 Eighth IEEE International Conference on Data Mining, pp. 263-272. Ieee, 2008. \\n[2] Cremonesi, Paolo, Yehuda Koren, and Roberto Turrin. \\\"Performance of recommender algorithms on top-n recommendation tasks.\\\" In Proceedings of the fourth ACM conference on Recommender systems, pp. 39-46. ACM, 2010. \\n[3] Liang, Dawen, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. \\\"Variational autoencoders for collaborative filtering.\\\" In Proceedings of the 2018 World Wide Web Conference, pp. 689-698. 2018. 
\\n[4] Shenbin, Ilya, Anton Alekseev, Elena Tutubalina, Valentin Malykh, and Sergey I. Nikolenko. \\\"RecVAE: A New Variational Autoencoder for Top-N Recommendations with Implicit Feedback.\\\" In Proceedings of the 13th International Conference on Web Search and Data Mining, pp. 528-536. 2020.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An interesting paper with limited novelty and solid technical solution\", \"review\": \"### Quick summary\\nThis work explores a popular problem, i.e., collaborative filtering, in an inductive setting, which is very important for real-world recommender systems. To address the challenges in the inductive settings, i.e., learning accurate representations for users who do not occur in the training data, the authors propose to construct a relational graph between users in the training data and new users based on a standard matrix factorization model and then use an attentive message passing framework to inductively compute user-specific representations. Besides, the authors prove the expressive and generalization capabilities of the proposed framework. Extensive experiments are conducted to demonstrate the effectiveness of the proposed framework both in transductive and inductive settings, as well as the scalability.\\n\\n### Clarity\\nThe presentation of the paper is good.\\n\\n### Originality\\nGenerally speaking, the inductive collaborative filtering is of limited novelty, while the technical solution is novel and solid, especially the part in constructing a relational graph between support users and query users.\\n\\n### Pros\\n1. The technical solution is interesting and solid, with a clear presentation in the paper.\\n2. The proofs of Theorem 1&2 are interesting, which theoretically shows the expressiveness and generalization abilities of the proposed model.\\n3. The experiments are extensive, most of which are convincing.\\n4. The presentation of the paper is good.\\n\\n### Cons\\n1. The major concern is that the novelty of inductive collaborative filtering with GNN is limited since Zhang & Chen 2020 [1] proposed the IGMC framework, which has done a comprehensive study on the inductive CF problem. Though the authors point out the difference between the proposed IRCF and Zhang's work, they do not give adequate materials to support their arguments. 
For example, the mentioned disadvantage of IGMC is that *the subgraphs in IGMC are ignorant of user and items indices*, however, from the perspective of the author, this issue is not that important, and may not occur very frequently, and can be trivially addressed by incorporating the user and item indices into IGMC. It will be more convincing if the authors can give more supporting materials in the paper.\\n2. For Theorem 1, the authors hold one argument that matrix factorization gives maximized capacity for learning personalized user preferences from historical rating patterns, however, it does not make sense to the reviewer. Can you provide any references or explain a bit more? Besides, what are the implications of Theorem 1 in helping us understanding the proposed IRCF ?\\n3. For Theorem 2, the authors only discuss the influences of the size of $\\\\mathcal{U}_1$. How about other variables, e.g., $B, H, M_2$, etc. \\n4. In the performance comparisons, the authors use RMSE in Table 1 and 2, while MAE in Table 3. This seems weird to the reviewers. Why do not you adopt the same metric, say either RMSE or MAE, since the experiments are actually the same type.\\n\\nGenerally speaking, the paper is of high quality. The idea is clear, and the technical solution is interesting and solid with a theoretical guarantee. Most experimental results are convincing. Besides, the writing of the paper is clear and easy to understand.\\n\\n------\\n### Post rebuttal\\nGreat thanks to the authors for the detailed replies. After reading them, I decided to keep my rating.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"This is a borderline paper and slightly above the threshold\", \"review\": \"This paper proposed an inductive collaborative filtering method, called IRCF. The goal is to possess expressiveness (against feature-driven methods) as well as generalization (against one-hot encoding based methods). In IRCF, there are a matrix factorization model for support users and a relation model for query users. The former is trained with transductive learning to obtain support users embeddings and item embeddings. The relation model then generates query user embeddings as weighted sum of support user embeddings by examining relational graph between support and query users.\", \"pros\": \"1. The paper is well-written and easy to follow. \\n2. The experimental results are satisfying. \\n3. The Theorem 1 and Theorem 2 reflect the tradeoff between capacity and generalization, which can guide the way of selecting support users. \\n4. The idea of using a set of pretrained embeddings as bases may be generalized to other inductive tasks.\\n\\nCons/Questions:\\n\\n1. A detailed related work section is expected. There have been many works studying inductive recommendation problems w/ or w/o user features. As far as I known, lots of methods like FISM generate user embeddings by aggregating embeddings of historical items, which naturally support inductive learning. In this paper, the proposed method IRCF views query user embeddings as weighed sum of support user embeddings, but the weights are still based on aggregating embeddings of historical items (i.e., d_u' in Eq. 4). Moreover, the support users are analogous to a set of bases, and each user can be represented by a combination of the bases. Thus, it is hard to assess the novelty without a related work section.\\n2. It is unclear on how to handle user bias terms b_u in Eq. 19 and Eq. 22 for query users or new users.\\n3. It seems that you assume C is a conical combination coefficients in Eq.4. 
Why not to use the unnormalized scores in Eq. 4, which matches the Theorem 1 better?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
BIwkgTsSp_8 | Learning to Noise: Application-Agnostic Data Sharing with Local Differential Privacy | [
"Alex Mansbridge",
"Gregory Barbour",
"Davide Piras",
"Christopher Frye",
"Ilya Feige",
"David Barber"
] | In recent years, the collection and sharing of individuals’ private data has become commonplace in many industries. Local differential privacy (LDP) is a rigorous approach which uses a randomized algorithm to preserve privacy even from the database administrator, unlike the more standard central differential privacy. For LDP, when applying noise directly to high-dimensional data, the level of noise required all but entirely destroys data utility. In this paper we introduce a novel, application-agnostic privatization mechanism that leverages representation learning to overcome the prohibitive noise requirements of direct methods, while maintaining the strict guarantees of LDP. We further demonstrate that data privatized with this mechanism can be used to train machine learning algorithms. Applications of this model include private data collection, private novel-class classification, and the augmentation of clean datasets with additional privatized features. We achieve significant gains in performance on downstream classification tasks relative to benchmarks that noise the data directly, which are state-of-the-art in the context of application-agnostic LDP mechanisms for high-dimensional data sharing tasks. | [
"Differential Privacy",
"Representation Learning",
"Variational Inference",
"Generative Modelling"
] | Reject | https://openreview.net/pdf?id=BIwkgTsSp_8 | https://openreview.net/forum?id=BIwkgTsSp_8 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"vUB_4r85oy",
"K3wVhp5Zjzu",
"HoNE9YA8Pa6",
"HC2Qcy0YqAq",
"1L3Zu9UFt6m",
"O862yo4ae-w",
"HW9iYOc5dFP",
"DmV3pbA-t2",
"VNRXBBtppFG",
"f0J47wsypJN",
"MVl-mSVeuBm",
"wYQfuiwrdPP",
"f_mbRtHTzik",
"8Z9gRfIuMF_"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040395439,
1606244454627,
1606160547835,
1606144683874,
1606112353455,
1605379386467,
1605378975280,
1605378702852,
1605378319981,
1605376302554,
1604018204540,
1603946333085,
1603795109566,
1603756771463
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3455/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3455/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3455/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3455/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3455/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3455/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3455/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3455/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3455/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3455/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3455/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3455/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3455/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper considers the problem of private data sharing under local differential privacy.\\n\\n(1) it assumes having access to a public unlabeled dataset for learning a VAE, so it reduces the dimensionality in a more meaningful way than simply running PCA. (2) the LDP guarantee is coming from the standard Laplace mechanism and Randomized Responses. (3) then the authors propose how to learn a model based on the privately released (encoded) data which exploits the knowledge of the noise distribution.\\n\\nNone of these components are new as far as I know, nor were they new in the context of differential privacy. For example, the use of publicly available data for DP was considered in: \\n\\n- Amos Beimel, Kobbi Nissim, and Uri Stemmer. Private learning and sanitization: Pure vs. approximate differential privacy. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 363\\u2013378. Springer, 2013.\\n\\n(they called it Semi-Private Learning...)\\n\\n- Papernot, N., Abadi, M., Erlingsson, U., Goodfellow, I., & Talwar, K. (2017). Semi-supervised knowledge transfer for deep learning from private training data. In ICLR-17.\\n\\nThe idea of integrating out the noise by leveraging the known noise structure was considered in:\\n\\n- Williams, O., & McSherry, F. (2010). Probabilistic inference and differential privacy. Advances in Neural Information Processing Systems, 23, 2451-2459.\\n\\n- Balle, B., & Wang, Y. X. (2018). Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising. In International Conference on Machine Learning (pp. 394-403).\\n\\nAnd much subsequent work.\\n\\nThe contribution of this work is in combining these known pieces (without citing some of the earlier work) to achieve a reasonably strong set of experimental results (by LDP standards).
I believe this is the first experimental study that uses a VAE for the dimension reduction; however, this alone is not sufficient to carry the paper in my opinion, especially since the setting is now much easier, with access to a public dataset.\\n\\nThe reviewers question the experiments, as the baselines do not use a public dataset, as well as the practicality of the proposed method. Also, connections to some of the existing work on private data release (a.k.a. private synthetic data generation) were not clarified. For these reasons, there was not sufficient support among the reviewers to push the paper through.\\n\\nThe authors are encouraged to revise the paper according to the suggestions and resubmit in the next appropriate venue.\"}",
"{\"title\": \"Benchmark claims updated; Clarification of data requirements;\", \"comment\": \"The VLM requires a clean, unlabelled training dataset that follows a similar distribution to the dataset one wishes to share under LDP. In many cases this would be a dataset that the organisation already has access to, rather than a public dataset. For example, it is highly likely that a public health body would have access to some kind of proprietary medical imaging dataset, or that a technology company would already have collected some data from a group of users (e.g. in jurisdictions where data privacy laws are weaker). In the scenario that sensitive data is used, one would use DP-Adam to train the VLM and protect the members of this clean dataset. Indeed, DP-Adam was used in all our experiments to reflect this.\\n\\nFurthermore, we have outlined several real-world applications throughout Sections 4 and 5, as well as two explicit examples in our latest response to AnonReviewer1, which we believe are compelling and particularly relevant to real-world scenarios (across healthcare, finance, and consumer industries).\\n\\nRegarding benchmarks, Ren et al. state: \\u201cour goal is to help the central server publish a synthetic dataset that has the approximate joint distribution of d attributes with local privacy\\u201d. We stress that synthetic dataset generation is an inherently different task, and such models cannot solve many of the problems described in this paper. \\n\\nFor example, the data join problem outlined in Section 4.3 requires that the privatized data be associated with specific individuals in order to be joined with another database. With LoPub, the data is a collection of synthetic random samples from a similar distribution, which do not represent individuals. The data join task is therefore not possible. Similarly, no evidence is provided in Ren et al. 
that LoPub is able to classify directly on its collected, privatized data, and so the experiments in Section 5 are also out of the scope of this model. Benchmarking against LoPub would require significant adaptations to their proposed model, which to the best of our understanding, would not be in line with the authors\\u2019 original motivation. That being said, we have updated the sentence specified to reflect the fact that our approach tackles problems in data sharing that are not possible with existing methods.\"}",
"{\"title\": \"More comments and concerns\", \"comment\": \"The authors have extended the literature review and addressed some of my concerns. However, it seems the applications of the proposed work are limited to the cases where the public data is available for learning a VAE. This assumption significantly limits the work for many real-world applications where such data is not available. The authors are expected to emphasize this in the paper including the abstract.\\n\\nAlso, I still believe the benchmarking should be improved at least by comparing to some LDP methods such as LoPub by Ren et al. If not possible, the following should be updated: \\\"We achieve significant gains in performance on downstream classification tasks relative to benchmarks that noise the data directly, which are state-of-the-art in the context of application-agnostic LDP mechanisms for high-dimensional data\\\"\"}",
"{\"title\": \"Dataset availability clarified; Applications explained and model comparisons justified\", \"comment\": \"__Point 4:__ This work is proposing a solution to the task of private data collection / sharing under LDP. In our experiments, we aim to demonstrate that the privatized data retain sufficient information for training ML algorithms; we do this by training a downstream classifier on the collected dataset but emphasise that we could use the privatized data for a range of other tasks. The goal of this paper is to solve the problem of private data sharing, which is not only applicable in situations where training a private classifier is not an option, but also provides a much greater flexibility to the data collector.\\n\\nTo concretely outline an application where data sharing is relevant (and training a private classifier would not be possible), consider the data join problem in Section 4.3. Suppose a tax authority wants to investigate an individual who has an account with a private bank. Transaction details from the bank may be useful in the investigation. Thus the private bank could train a VLM on their transaction dataset, and this could be used to privatize the transactions of the individual, before sending to the authorities. The authorities could then join these privatized features with the clean features they already have on the individual. Our experiments demonstrate that by joining private and clean features, we have more information about the individual than with only clean features.\\n\\nImportantly, in this example, the tax authority will want to do many things with this new joined data set. They may want to train a classifier, but they will also need to be able to audit their work with access to the data used, among many other things. 
Therefore, a privately trained classifier is not relevant for this application.\\n\\n__Point 8:__ As we understand, the reviewer is concerned that the availability of datasets used in our work is unrealistic. We address this point below, but please clarify if we have misunderstood the reviewer\\u2019s position.\\n\\nThere are two datasets in our approach:\\n\\n1. The pre-training dataset is a clean (i.e. not privatized) dataset used to train the VLM. If this is a sensitive dataset, and not publicly available, the organisation collecting the data would use DP-Adam to protect the members of this dataset.\\n\\n2. The second dataset is a sensitive dataset to be collected by the organisation. The VLM is given to the data owners, who privatize their data locally before sending it to the organisation. \\n\\nOur work crucially provides a framework that allows data owners (who are apprehensive about sharing sensitive data) to share this second dataset with an organisation, under LDP.\\n\\nThroughout Sections 4 and 5, we describe numerous contexts in which it is realistic that an organisation would have access to a pre-training set, as in our experiments. For clarity, we explicitly outline one application here, which relates to the experiment in Section 4.2. Suppose a public health body wishes to collect chest scans from different hospitals for patients with a novel disease. The health body would have access to historic chest scan datasets on which they could train a VLM. The VLM would then be sent to each hospital, and chest scans of patients with the novel disease would be privatized locally and sent back to the health body, forming a privatized dataset. This could then be used in many downstream tasks, one of which is to train a novel disease classifier. \\n\\nWe hope this has sufficiently demonstrated one type of situation in which our data assumptions are realistic.
We also hope that this example reinforces our response to point 4, in highlighting another application of our work in which private classifiers cannot be used.\"}",
"{\"title\": \"A few more questions\", \"comment\": \"On Point 4: Although these methods are not designed for private data sharing, they tackle the same problem, private classification learning. When these methods can privately train classifiers well, why do we need to share private data?\\n\\nOn Point 8:\\n\\n1. Can you elaborate on what knowledge about the data distribution is essential for the training? For example, how similar must the two datasets be for the proposed method to work well? \\n2. It will be necessary that the pre-training dataset is really public. Extracting a sub-set from a so-called private dataset is not convincing. In practice, we are unable to split part of the private dataset for pre-training only given a private dataset.\\n3. I cannot agree that the method is practically better than the baseline if the public data are only used by the proposed method. The improper setting in experiments will fail to reveal the inherent reason why the method works.\"}",
"{\"title\": \"Literature review extended; benchmarks justified\", \"comment\": \"We thank the reviewer for their response, and have addressed each of their points in order below.\\n\\n**Major Point 1:** We have updated the introduction to include a more in depth discussion of the related work, including the papers cited in this review.\\n\\n**Major Point 2:** Thank you for pointing this out, we have updated this literature review to reflect the fact that not all work in synthetic generation requires labels for training, and more importantly to outline how our work differs from existing work in the literature.\\n\\nIn short, our method is solving a very different problem to the work on synthetic generation, and we believe synthetic generation to be limited in its applications. As mentioned in our response to AnonReviewer1 point 4, we learn an LDP mechanism for privatizing individual datapoints or subsets of features, for use in downstream tasks; this opens up possibilities to solve a much broader range of problems. Synthetic generation approaches create datasets that are not related to individuals, but are simply likely under the learnt probability distribution. One has no way of collecting / privatizing new data with these approaches, as is required for the tasks in Sections 4.1, 4.2 or 5. \\n\\n**Major Point 3:** In P3GM, it is suggested that training a VAE end to end with DP-SGD is challenging, and this is something we also found. We found that the simple 2-stage approach outlined at the end of Section 3.1 was sufficient for learning good representations. P3GM also adopts a technique whereby the training of the VAE is split into steps, however their approach seems more complicated than ours. 
It should be stated that we are attempting to solve a completely different problem to P3GM as described in our rebuttal to Major Point 2 above.\\n\\n**Major Point 4:** As discussed in our response to AnonReviewer1 point 4, we do not see an obvious way to benchmark against DP classifiers due to the nature of the tasks solved by our model in this paper. The literature on local differential privacy for high dimensional data is still in its infancy, and much of the existing work is tailored to specific datasets rather than a general model such as ours. For example, [1, 2] focus on the collection of data over time, whilst [2] note that for one time collection \\u201cdirect randomization on the true client\\u2019s value is sufficient to provide strong privacy protection\\u201d. Indeed this direct randomization approach is in line with our benchmark. These papers have been discussed in the literature review within Section 1.\\n\\n**Major Point 5:** We tested our algorithm on MNIST, which contains 784 features, and do not consider this to be a low-dimensional problem. Even if, in the context of SOTA non-DP computer vision work, one might categorise MNIST as low dimensional, this is certainly not the case in the context of LDP research. By applying the algorithm to both MNIST and Lending Club, we demonstrate the versatility of the approach. It should theoretically work on any data type for which you can train a VAE to learn good representations. This allows us to make use of a vast body of work that has already been achieved in the generative modelling community.\\n\\n**Minor Points:** All three minor points have been addressed in the updated manuscript.\\n\\nWe hope that these clarifications as well as the updates to our manuscript have addressed the reviewer's comments, and that the reviewer will consider revising their score accordingly.\\n\\n[1] Ding et al. - Collecting telemetry data privately, 2017 \\n[2] Erlingsson et al.
- RAPPOR: Randomized aggregatable privacy-preserving ordinal response, 2014.\"}",
"{\"title\": \"Related work extended; performance justified\", \"comment\": \"Thank you for the valuable feedback; we hope that the points below adequately address your concerns.\\n\\nRe: \\u201cThe related work and comparisons are not enough\\u201d\\n\\nWe have extended our literature review to give a greater overview of research in this field.\"}",
"{\"title\": \"Summary note\", \"comment\": \"We would like to thank each of the reviewers for their feedback.\\n\\nTwo common themes arose which we have addressed extensively. Firstly, it was requested that we add more rigorous proofs of how our approach produces $\\\\epsilon$-LDP representations of data at both latent and feature level. These proofs have been included in Appendix A and B. Secondly, we have included a more thorough literature review, which outlines some recent existing work as highlighted by reviewers, and importantly, how our approach improves upon this work.\\n\\nWe feel that this review process has improved our paper and hope both the reviewers and readers of the paper agree.\"}",
"{\"title\": \"Privacy guarantees clarified; sufficiency of empirics justified\", \"comment\": \"We thank the reviewer for their detailed feedback. Point-by-point responses are provided below:\\n\\n**Points 1 & 2:** We chose the Local Laplace mechanism as it guarantees (epsilon, delta=0)-LDP, while the Gaussian mechanism guarantees (epsilon, delta>0)-LDP which is not as strict. We believe that the method should work using the Gaussian mechanism; this would require a minimal change to the model (changing the prior and approximate posterior to be Gaussian). A paragraph has been added to Section 2 to discuss the choice of Laplace versus Gaussian mechanisms. We have also added a formal definition of the local Laplace mechanism in Section 2, and a proof that this guarantees LDP in Appendix A. We have added a proof that LDP is immune to post-processing in Appendix B.\\n\\n**Points 3 & 7:** There are two separate types of data in our approach: data which is transferred between entities and data which is only used for training the VLM. The latter must satisfy CDP so that the VLM parameters can be shared, while the former must satisfy LDP so that it can be safely shared. Because these two requirements are completely independent, keeping track of privacy is relatively straightforward: \\n* DP training (e.g. DP-Adam) must be used when training the parameters of the VLM that are to be shared. This is done in stage 2 of VLM training as outlined in Section 3.1.\\n* LDP is required for transferring the data. This is achieved when the VLM adds noise in the latent space.\\nWith this clarification of the two-stage approach, and the proofs that have now been added to Appendix A and B, the LDP and CDP guarantees are proved.\\n\\n**Point 4:** Our approach aims to tackle problems in which data must be privately shared between entities, thus requiring a LDP mechanism. 
The experiments in our paper require data sharing, and thus CDP approaches such as those referenced, cannot be used to tackle this problem. Consequently, they cannot be applied as benchmarks in our work.\\n\\n**Point 5:** Thank you for pointing this out. We had missed a citation for [Gylberth et al. - Differentially Private Optimization Algorithms for Deep Neural Networks] who show that DP-Adam satisfies CDP.\\n\\n**Point 6:** With respect to privacy leakage, we have clarified this concern in our response to points 3 & 7 above. The reviewer also comments on the relevance of our data set choices. We describe in Sections 4.1-4.3 and Section 5 multiple different scenarios in which the dataset assumptions we have made are highly relevant.\\n\\n**Point 8:** We agree that with access to a similar, pre-training dataset $D_1$, we are at an advantage over our baseline, and this is the motivation for the paper. We are demonstrating that, with some knowledge about the structure of our data distribution, one can learn a mechanism to privatise the data which is more effective than a fixed mechanism. This setup is a common scenario for organisations looking to privately exploit data, as we have articulated in our applications. \\n\\nIn summary, we have made significant improvements to the manuscript in response to some of the reviewer's comments (e.g. extensive detail added to address the privacy guarantees), and hopefully provided helpful clarification in cases where perhaps the reviewer misunderstood our work (e.g. the applicability of central DP). With this, we hope the reviewer will reconsider their score.\"}",
"{\"title\": \"Requisite privacy-guarantee proofs added\", \"comment\": \"We thank the reviewer for their valuable feedback, and respond to their comments below.\\n\\n**Weak point 1:** The reviewers primary concern was the lack of explicit detail around proving that our approach satisfies the requirements of LDP. This was an oversight on our part, and we have now added multiple appendices with proofs as well as commentary within the body of the paper to further clarify. \\n\\nIn particular, we have added an explicit definition for the local Laplace mechanism (see Equation 5), as well as the accompanying proof that the local Laplace mechanism satisfies epsilon-LDP in Appendix A (the proof that LDP is immune to post-processing is additionally provided in Appendix B). We have also updated Section 3.1 to clarify that sampling from $q_\\\\phi(z|x)$ is equivalent to a Laplace mechanism $\\\\mathcal{M}^\\\\text{(local)}\\\\left(x, \\\\mu_{\\\\phi}(\\\\cdot), \\\\epsilon_x \\\\right)$, since sampling from $q_\\\\phi(z|x)$ involves passing a datapoint through the mean function $\\\\mu_\\\\phi(x)$ and adding laplace noise of scale $b = \\\\Delta\\\\mu_\\\\phi / \\\\epsilon_x$, where $\\\\Delta\\\\mu_\\\\phi$ is the sensitivity of $\\\\mu_\\\\phi(.)$, as determined by the clipping function. \\n\\nWe now have all formal results in the paper needed to fully address the reviewers main concern, and hope that consequently the reviewer will consider raising their score.\\n\\n**Weak point 2:** For clarification, clean accuracy refers to the accuracy of our classifier when applied to a clean datapoint at inference time, while private accuracy refers to the accuracy of our classifier when applied to an LDP datapoint. In every case training is done on LDP data. 
In order to clarify these definitions to readers, the second paragraph of Section 4 has been updated significantly.\\n\\nWith the proofs of privacy added and clarifications of the mechanism made in the text, we feel that we have addressed the primary concern of the reviewer. We hope they agree and will consider revising their score accordingly.\"}",
"{\"title\": \"This work proposes an application-agnostic way to generate LDP representations of sensitive data or synthetic data that satisfies LDP. The proposed approach is effective for high-dimensional data. Downstream ML tasks can take these representations or synthetic data without worrying about privacy leakage, and achieve better accuracy than existing LDP solutions\", \"review\": \"Strong point 1: The idea of putting noise insertion (via noisy data-generation models) and optimization of good representations together to obtain LDP representations and/or synthetic data seems to be effective. While (6) relies on some independency assumptions, it might be fine in most cases and empirical evidence is reported to support it\\n\\nStrong point 2: It is an application-agnostic approach and theoretically any downstream tasks and models can be supported... When there is a label, the privacy budget is split and random perturbation is used on labels\\n\\nStrong point 3: It outperforms naive LDP baselines (with noise added directly to features) a lot in experiments\\n\\nWeak point 1: The proof of the most important result is missing. It is said that \\\"sampling from $q_\\\\phi(z|x)$ produces a representation $\\\\tilde z$ of $x$ that satisfies $\\\\epsilon$-LDP\\\". I don't think it is a trivial result, and the author needs to put everything together (including the analysis of sensitivity, the optimization algorithm, and so on) to formally prove it\\n\\nWeak point 2: A minor issue: in the figures of the experiments, by \\\"clean accuracy\\\", do you actually mean \\\"accuracy\\\" (for some algorithms in the figures, it is private accuracy?)\\n\\nW1 is the main reason for the rating of 6 and not higher - I highly encourage the authors to fix it before publication\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Well written, but lacking theoretical discussion and with weak empirical studies.\", \"review\": [\"In this paper, the authors present a generative-model-based Laplace mechanism. By training the VAE on some dataset, the trained encoder can be used to privatize raw data towards (epsilon, delta)-LDP. Though the method is novel, the privacy guarantee of the proposed method is not clearly stated and proved. Related experiments are not convincing, either.\", \"**Strength**\", \"The paper is well written, with a clear motivation and explanation of the methodology. To my knowledge, I believe the work is useful for the privacy research community. The proposed method is also novel.\", \"**Weakness**\", \"The motivation to use the Laplace mechanism is not very clear. At the beginning of Sec. 2, the authors justify the usage by \\\"as it provides strong theoretical privacy guarantees\\\". This is not convincing for readers, especially those who are not familiar with LDP. Since the Laplace mechanism directly comes from CDP, I wonder how the Gaussian mechanism would work. How does the Laplace mechanism guarantee privacy better than the Gaussian mechanism? A reference or proof is essential here.\", \"On page 3, the authors briefly mention that the local version of the Laplace mechanism can be epsilon-LDP if the sensitivity is accordingly defined. This really lacks rigorousness. In the following sections, the authors refer to (Dwork and Roth, 2014) for the post-processing theorem. Since the work (Dwork and Roth, 2014) is mainly about CDP, I am not sure how the post-processing theorem can be adopted for LDP. Either a reference or a clear proof is required.\", \"Meanwhile, an end-to-end proof of the privacy guarantee of the VAE is lacking. I am not sure if the proposed VLM training guarantees privacy. Likewise, the privacy of the encoding is not very clear. In particular, stage 1 involves non-private training.\", \"The experiments are run with pretty weak baselines.
Throughout this paper, the authors actively use the same conclusions from CDP (Dwork and Roth, 2014). Thus, I suppose state-of-the-art CDP algorithms should also be applicable to the experimented tasks, e.g., classification. For the specific task, how well does the proposed method compare to the SOTA CDP private learning algorithms? For example, (Abadi, et al., 2016) or (Phan, et al. 2017). In particular, (Phan, et al. 2017) also proposed an adaptive Laplace mechanism without depending on pre-training of the mechanism.\", \"On page 4, the DP-Adam mentioned in Stage 2 is not stated or proved in (Abadi et al., 2016); only DP-SGD was discussed. A strict proof is required for DP-Adam, which intensively re-uses private results to help improve the gradients. Thus, the privacy guarantee is not straightforward.\", \"It seems the VLM training uses a non-DP optimizer at stage 1. Then how can the whole training guarantee privacy on the VLM training set? In the experiments, the VLM training set is directly extracted from the private dataset (MNIST). Even though the authors experiment with diverse D_1 / D_2 distributions for VLM train/test in Sec 4.2, the two datasets are still from the same dataset. In practice, when such a D_2 is private, it is hard to find a D_1 that is non-private. I am afraid this could cause serious privacy leakage. Therefore, I doubt the experimental results are useful for proving the effectiveness of a private algorithm. More realistic data should be used.\", \"In Sec 4.1, the authors run the experiments in two steps. First, the VLM is trained with 'a DP encoder using D_1'. It is not clear where the DP encoder comes from. Is the VLM also trained with DP? The setting has to be clarified.\", \"The experimental comparison seems unfair to the baselines. For the VLM, there are two datasets, for training the VLM and encoding the classification training data. However, the baseline only has the classification training data.
The VLM encoder has additional information about the data distribution or the noise (by back-propagation in VLM training). The unfairness in the information could be the core reason for the difference in performance. How does the baseline perform if it is pre-trained and tuned (hyper-parameters) on another dataset?\", \"(Phan, et al. 2017). Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"For LDP, when applying noise directly to high-dimensional data, the required noise entirely destroys data utility. In this paper, the authors introduce a novel, application-agnostic privatization mechanism that leverages representation learning to overcome the prohibitive noise requirements of direct methods. They further demonstrate that this privatization mechanism can be used to train machine learning algorithms across a range of applications. They achieve significant gains in performance for high-dimensional data.\\n\\nThe authors have benchmarked results against such a mechanism, in which Laplace noise is added to all continuous features and each of the categorical features is flipped with some probability. \\nFor high-dimensional datasets, features are often highly correlated; consequently, noising features independently is wasteful towards privatizing the information content in each datapoint. A more effective approach to privatization involves noising a learned lower-dimensional representation of each datapoint using a generic noising mechanism. Applying the Laplace mechanism thus ensures the encoded latents, as well as the reconstructed datapoints, satisfy LDP. They focus on classification tasks. At inference time, they show that this classifier can act on either clean or privatized datapoints.\\n\\nFor the writing, it\\u2019s better to give a clear algorithm. For the experiments, when epsilon<=10, the accuracy is not very good. The related work and comparisons are not enough; there is quite a lot of work on LDP learning that could be reviewed. By the way, the usual term is private data rather than privatized data.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A well-written paper on an important subject. Good results, needs some improvements.\", \"review\": \"Summary:\\nThis paper presents a new privatization mechanism for Local Differential Privacy based on representation learning. The proposed VAE-based method is used for the low-dimensional latent representation of the data and uses the Laplace mechanism to satisfy Local DP. The paper shows this mechanism can be used across various applications such as private data collection, private novel-class classification, data joining, etc.\\n\\nThe paper is clear and easy to follow. The proposed method provides a great solution for data sharing with local DP and can be used in many real-world applications. However, some clarification is needed. The experimental results can be improved by adding more baselines. Some important/recent references are also missing.\\n\\nMajor comments:\\n\\n- It would be better if the authors add a related work section or extend the literature review paragraph in the Introduction, include more recent work in this area, and point out how the proposed work advances the state-of-the-art. There are several works on DP for high-dimensional data, such as: AutoGAN-based Dimension Reduction for Privacy Preservation by Nguyena et al., and P3GM: Private High-Dimensional Data Release via Privacy Preserving Phased Generative Model by Takagi et al.\\n- There are also several existing works on LDP based on VAE. The authors are expected to state the differences between the existing work and the proposed work.\\n- The authors mentioned that DP synthetic data models need large data; this is also the case for training the VAE in this work. Also, they mentioned these techniques need labeled data. DP-GAN models need access to real data for training (no label is required) and can then be used indefinitely for generating synthetic data.
Please clarify this.\\n- In an existing work (the P3GM paper mentioned above), it is shown that VAE\\u2019s objective function is too sensitive to the noise of DP-SGD; how do the authors tackle this problem? And how does it affect the final results?\\n- The baseline methods are limited to \\\"direct noise features\\\" only. It would be better if the authors use other techniques such as some recent work on LDP for high dimensional data or other general DP classifiers, DP-SGD, PATE, etc. as the baselines for this experiment.\\n- It would be better if the authors also showed the performance of the proposed method on higher dimensional data (e.g., Lending Club has only 23 features).\\n\\nMinor comments:\\n\\n- In the introduction, please list the contributions of this work.\\n- Please define all variables in Eq. 5-6.\\n- Please add more up-to-date references.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
gJYlaqL8i8 | Learning to Sample with Local and Global Contexts in Experience Replay Buffer | [
"Youngmin Oh",
"Kimin Lee",
"Jinwoo Shin",
"Eunho Yang",
"Sung Ju Hwang"
] | Experience replay, which enables the agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL). To utilize the experience replay efficiently, the existing sampling methods allow selecting out more meaningful experiences by imposing priorities on them based on certain metrics (e.g. TD-error). However, they may result in sampling highly biased, redundant transitions since they compute the sampling rate for each transition independently, without consideration of its importance in relation to other transitions. In this paper, we aim to address the issue by proposing a new learning-based sampling method that can compute the relative importance of transition. To this end, we design a novel permutation-equivariant neural architecture that takes contexts from not only features of each transition (local) but also those of others (global) as inputs. We validate our framework, which we refer to as Neural Experience Replay Sampler (NERS), on multiple benchmark tasks for both continuous and discrete control tasks and show that it can significantly improve the performance of various off-policy RL methods. Further analysis confirms that the improvements of the sample efficiency indeed are due to sampling diverse and meaningful transitions by NERS that considers both local and global contexts. | [
"reinforcement learning",
"experience replay buffer",
"off-policy RL"
] | Accept (Poster) | https://openreview.net/pdf?id=gJYlaqL8i8 | https://openreview.net/forum?id=gJYlaqL8i8 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Z8YcUVf3ZA8",
"_osVIq05oZ",
"kjpLzlmLPZz",
"75K_grBoI94",
"prjyUmVPFqZ",
"ED6BeS6h8Qy",
"L6UUqq22qif",
"l1mBuzFfqal",
"AFrY8Tn6ywS",
"v-vN_YpJDD2",
"oWLlXXhr0u",
"XgmjEC5m9H5",
"_h8jJJnxMjm"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040406652,
1606223038446,
1606214985179,
1606200076597,
1606068588915,
1606010026369,
1605424587982,
1605424536203,
1605424474050,
1605424250466,
1604331982480,
1603944412177,
1603906055493
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3454/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3454/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3454/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3454/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3454/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3454/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3454/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3454/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3454/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3454/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3454/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3454/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"All reviewers agree that this paper is worth publishing. It investigates a novel idea on how to adaptively prioritise experiences from replay based on relative (within-batch) importance. The empirical investigation is thorough, and while the performance improvements are not stunning, the benefit is surprisingly consistent across many environments.\"}",
"{\"title\": \"Answer for Frameskip\", \"comment\": \"We have now realized why the reviewer has been confused. The term **NoFrameskip-v4** appears in the supplementary material. For NERS under Rainbow, our implemented code first loads an environment with the NoFrameskip-v4 version (please see lines 15-16 of env.py in the uploaded code). After that, the environment repeats each action four times whenever the step function is called (lines 58-66 of env.py). Since this confuses readers, we have revised it by removing the term from the supplementary material. We thank the reviewer for helping make our manuscript more readable.\"}"
"{\"title\": \"Re: Frameskip\", \"comment\": \"Thanks for clarifying about the usage of frameskip. I'm still a bit confused about the level descriptions, which include the phrase \\\"NoFrameskip-v4\\\". I'm guessing this means no frameskip is included in the environment by default, but then the 4 frameskip used is added afterwards? Not a big deal, but a bit confusing.\"}",
"{\"title\": \"Thanks for your response.\", \"comment\": \"Thanks for your response, and I updated my rating.\"}",
"{\"title\": \"Revision: Interesting idea after clarifying the misunderstanding.\", \"comment\": \"> Q1. The sampler is trained so that it prioritizes high reward timesteps as in eq 6. But this is dubious. What if we need to sample the N timesteps right before a high-reward timestep, even though those preceding timesteps do not have high return themselves?\\n\\n* Thank you for pointing out my misunderstanding. The idea makes a lot more sense now it\\u2019s clearer to me what the sampler update part is actually doing. The other two reviewers shared the same feeling that it was unclear in the first version, especially around whether you get the expected reward of the current and past policy. I will revise my rating accordingly. \\n\\n> Q3. The experiment section is not convincing. The model has not been trained till convergence. e.g. SAC and ERO has all been trained with 1e6+ steps at least.\\n\\n* My comment \\u201cThe model has not been trained till convergence.\\u201d came from Figure 3 rather than from Table 1, which I now see that you mentioned it was trained for 0.5M steps. Apologies for the seemingly rushed review.\"}",
"{\"title\": \"Response to Reviewer #1 (1/2)\", \"comment\": \"We sincerely appreciate your time and efforts in reviewing our paper, as well as the constructive comments. We respond to each of your comments one by one.\\n\\n**Q1. The reasoning behind the task selection should also be made explicit. The Atari subset used here is a bit unusual, particularly the choice to not use frame-skip.**\\n\\n- Please note that we **do use frame-skip** of 4. For Atari tasks, we have used completely **the same configurations as the ones used in [1]**. For detailed configuration, please see training details on **page 17** of our revised manuscript. We also have explained how we have chosen environments in the first subsection of Section 3.1 on page 5. \\n\\n- We have included additional results on **more Atari tasks** in Table 2 (Page 7) of our revised manuscript.\\n\\n---\\n**Q2. The number of random seeds should also be mentioned.**\\n\\n- We used $5$ random seeds. Although we already mentioned that five instances had been used, we will further clarify that this denotes five random seeds. Please see the captions of Figures 3, 4, and 5 (pages 6-8) in our revised manuscript, respectively.\\n\\n---\\n**Q3. Investigating the sampling decisions of NERS is attempted in Figure 4, but further work should be done to provide evidence to the 'diversity of samples' claim. Ideally, NERS wouldn't just trade off TD error and Q-value over time, but also within each batch. Reporting something like the average minibatch Q-value/ TD-error standard deviation on NERS vs other methods would be nice. A qualitative evaluation akin to Figure 1 would also help guide intuition.**\\n\\n- We appreciate the reviewer\\u2019s suggestion. We agree that what you suggested will be an effective way to show the diversity, and have obtained the following table (please see also Table 3 and the last paragraph of Section 3 on page 9 in our revised manuscript). 
The table shows sampled transitions' statistical values for $Q$-values and TD-errors on Pendulum-v0 under SAC at 10,000 training steps with initially 1,000 random actions. It is easily observable that **NERS has a higher standard deviation of TD-errors and $Q$-values than RANDOM and ERO.** Furthermore, NERS still has a higher average of TD-errors and $Q$-values than others. Although PER has the highest standard deviation for TD-errors, it also has the lowest standard deviation for $Q$-values. It means that PER sampled biased transitions. \\n\\n---\\n| Method \\t| STDEV of TD-errors \\t| STDEV of $Q$-values \\t| AVG of TD-errors \\t| AVG of $Q$-values \\t|\\n|:------:\\t|:------------------:\\t|:-----------------:\\t|:----------------:\\t|:---------------:\\t|\\n| NERS \\t| 723.01 \\t| 65.22 \\t| 87.54 \\t| -104.13 \\t|\\n| RANDOM \\t| 528.76 \\t| 60.46 \\t| 62.43 \\t| -120.13 \\t|\\n| PER \\t| 1256.71 \\t| 49.78 \\t| 139.49 \\t| -138.05 \\t|\\n| ERO \\t| 560.56 \\t| 59.16 \\t| 65.44 \\t| -119.03 \\t|\\n\\n---\\n- Please note that if the diversity is low, it is intuitively difficult to maintain high values of TD-errors since the updated actor and critic networks **will decrease the values**, and the Q-value **will not increase** without observing new transitions. \\n\\n- Thus, NERS having both high Q-values and TD-errors in Figure 5 (Page 8 in the revision) over RANDOM suggests that NERS samples more diverse transitions which are effective to both the actor and the critic networks while training. \\n\\n- The **diversity of the samples** claim refers to NERS's **ability to** sample transitions with different criteria (focusing on either the Q-value, TD-errors, rewards, or even the raw states).\\n\\n\\n\\n---\\n[1] van Hasselt, Hado P., Matteo Hessel, and John Aslanides. \\\"When to use parametric models in reinforcement learning?.\\\" Advances in Neural Information Processing Systems. 2019.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank you for your time and effort in reviewing our paper. We respond to your comments below:\\n\\n---\\n**Q1. The sampler is trained so that it prioritizes high reward timesteps as in eq 6. But this is dubious. What if we need to sample the N timesteps right before a high-reward timestep, even though those preceding timesteps do not have high return themselves?**\\n\\n- This is a critical misunderstanding. Please note that our NERS **does not** prioritize high-reward timesteps. Note that Eq. 6 is the expected **cumulative reward**, and not a reward at a specific timestep. Thus, our method will sample transitions before a high-reward timestep, if they help the model to eventually transition to the high-reward timestep, since doing so will maximize the **cumulative reward.** \\n\\n- We have shown this point with the experiments in the Pendulum* environment, which is modified from the original Pendulum environment to have **sparse rewards**, such that the agent receives a reward only when the rod is in a **near-upright position** (please see the footnote on page 5, and the description of the Pendulum* environment on page 13). Thus we need preceding transitions in order to get to the high-reward (upright) position, although they provide **no reward** at all. Figure 3 (a) and (d) on page 6 show that NERS achieves significantly better convergence and performance over baseline sampling methods in this sparse reward environment.\\n\\n---\\n**Q2. How do you explain why \\\"NERS focuses on sampling transitions with high TD-errors in the beginning, ... as the timestep progresses, it samples transitions with both high TD-errors and Q-values (diverse)\\\", given that it's trained with a single objective to maximize sampled reward? 
\\\"using various features in an advanced manner.\\\" is not a satisfactory explanation.**\\n\\n- NERS is **not** trained to sample transitions with maximum rewards, and is trained to maximize the **expected cumulative rewards**, thus it will learn to sample any transitions that help maximize the **cumulative reward**, by dynamically weighting various features (e.g. TD-errors, Q-values, and raw features) based on their contributions in the course of training. \\n\\n- The plots of Q-values and TD-errors in Figure 5 thus show that NERS is able to sample transitions by **dynamically focusing on different features** at different stages of training. In the beginning, the critic network for value estimation is not well trained, and excessive learning of the agent may be harmful in early steps, so it is reasonable that NERS selects transitions with high TD-errors to focus on updating critic networks in early training iterations (Figure 5(d-f)), while focusing on transitions with both high $Q$-values and TD-errors as training goes on (Figure 5(a-c)). This is a unique trait of NERS that contributes to its success. \\n\\n---\\n\\n**Q3. The experiment section is not convincing. The model has not been trained till convergence, e.g. SAC and ERO have all been trained with 1e6+ steps at least.**\\n\\n- We report our results at 0.5M iterations, since many existing works that focus on sample-efficient reinforcement learning also report their results before full convergence [1, 2, 3, 4]. \\n\\n- We believe that 0.5M is more than sufficient in our case, since we use SAC and TD3, which are known to converge faster than DDPG, which is used by ERO. \\n\\n- Also, we empirically observed that in the MuJoCo environments, if there are meaningful differences between different sampling methods in early steps (say, 0.3M steps), there is no change in their relative rankings even with more iterations. 
We will include the results with a larger number of iterations in the final version of the paper.\\n\\n---\\n**Q4. In addition, what is the reason to do Hopper from Mujoco instead of Hopper-V2 from the OpenAI gym? With the latter, you can compare with the numbers published in ERO.**\\n\\n- We apologize for the confusion. The MuJoCo tasks are completely the same as the tasks provided by the OpenAI gym. We have clarified the precise version information in all figures on pages 6, 7, and 8 in the revised manuscript. \\n- Since the experiments in ERO [2] are conducted only with the DDPG algorithm, it is difficult to compare with the reported performances. Note that we validated all sampling methods with two RL algorithms that are known to work better (SAC, TD3).\\n\\n---\\n\\n[1] Fujimoto, S., H. van Hoof, and D. Meger. \\\"Addressing function approximation error in actor-critic methods.\\\" Proceedings of Machine Learning Research 80 (2018): 1587-1596.\\n\\n[2] Wang, Che, and Keith Ross. \\\"Boosting Soft Actor-Critic: Emphasizing Recent Experience without Forgetting the Past.\\\" arXiv preprint arXiv:1906.04009 (2019).\\n\\n[3] Zha, Daochen, et al. \\\"Experience replay optimization.\\\" In Proceedings of the 28th International Joint Conference on Artificial Intelligence. AAAI Press, 2019. p. 4243-4249.\\n\\n[4] Haarnoja, Tuomas, et al. \\\"Soft Actor-Critic Algorithms and Applications.\\\" arXiv (2018): arXiv-1812.\"}"
"{\"title\": \"Response to Reviewer #2 (1/2)\", \"comment\": \"We thank you for your time and effort in reviewing our paper, as well as the constructive comments. We have revised the manuscript by faithfully reflecting your comments. We respond to your comments below:\\n\\n---\\n**Q1. In the original prioritized ER paper, they use a ratio to mitigate the biased sampling issue, did the authors ever visualize what the result would be (say in Figure 1) if you use that ratio?**\\n\\n- In the PER [1] paper, the authors used $\\\\alpha=0.7$, which results in more biased sampling than our version of the PER, which used $\\\\alpha=0.5$. Since PER's sampling becomes more biased as $\\\\alpha$ increases, using the $\\\\alpha$ in the original paper will result in even more severely biased sampling based on TD-errors.\\n\\n---\\n**Q2. Eq (2) the input feature can be highly non-stationary/unstable. For example, some of the variables may decrease all the time, and some others may increase all the time. Intuitively, training with such data should be very challenging. It looks like that in order to resolve the issue of biased sampling, the authors introduce an even more difficult task. Do the authors have some comments about this?**\\n\\n- We agree that predicting **precise values** from the inputs will be a highly difficult problem, as such is known to be very challenging in cases such as multi-agent RL. However, please note that our NERS **does not predict precise importance** but rather estimates the **relative importance** of samples chosen from the buffer, since this is all the information we need. As shown in Figure 3(d) and Figure 5(a), such consideration of relative importance by NERS actually results in **more stable training** compared to ERO.\\n\\n---\\n**Q3. I am confused about how the sampling network is updated. In Algorithm 1, my understanding is that if the current time step is the end of an episode, then update the sampling network. Is it correct? 
Note that Algorithm 1 indicates that the actor, critic are updated at each time step. But the paper also says that the NERS is updated at each evaluation step and this means that throughout the evaluation episode the policy should be fixed to estimate (6). Can the author further explain how the network is updated? Using evaluation to learn parameters seems unrealistic; the evaluation may happen only one time in practice. By evaluation, my understanding is that it measures how much we can gain if we deploy such a policy. If updating NERS requires the use of evaluation data, this largely limits usability.**\\n\\n- Thank you for the insightful comment. We compute the replay reward at each evaluation, since performing multiple evaluations is not difficult in standard environments. \\n\\n- However, as you mentioned, it will be difficult to compute the reward in environments where evaluation is allowed only once. To resolve this issue, we have slightly modified the replay reward such that NERS can use the reward obtained from cumulative rewards at each **training episode**. We report its performance in the experiments in Figure 4(b) and Figure 4(c) of the revision (NERS*). We can see that its performance is almost identical to that of the original NERS on BipedalWalker-v3 and LunarLanderContinuous-v2. We thank you for the insightful suggestion, as this will further enhance the usability of our method.\"}"
"{\"title\": \"Response to Reviewer #2 (2/2)\", \"comment\": \"---\\n**Q4. I am also confused by the statement \\u201cThe replay reward is interpreted as measuring how much actions of the sampling policy help the learning of the agent for each episode.\\u201d Eq (6) says that the reward is actually how much it improved from the current evaluation to the previous one. Whether you use the special sampling method or not, it should make an improvement. So this difference does not indicate \\u201chow much the sampling distribution can help.\\u201d**\\n\\n- We agree that the increase of the reward can come either from the agent learning a better policy, or from the experience sampler sampling more effective transitions. Thus the reward may increase regardless of the sampling policy we use due to the training of the agent's policy, but note that we need to sample better transitions in order to obtain a **larger increase** of the reward.\\n\\n---\\n\\n**Q5. And one question, how do you implement the Prioritized ER? Do you also use the importance ratio to anneal the bias as described in section 3.4 in that paper (https://arxiv.org/pdf/1511.05952.pdf)? Do you ever tune a bit the parameter beta in that formula?**\\n\\n- Yes, we **do use $\\\\alpha=0.5$ to anneal the bias**, which is smaller than $\\\\alpha=0.7$ used in the original paper, and thus our PER sampler is less biased. We used this ratio since it is known to work well (Rainbow, Hessel et al. 2018). We linearly increase $\\\\beta$ from $0.4$ to $1$. Page 17 (Supplementary file) of the revision provides the detailed configurations of the PER.\\n\\n---\\n**Q6. An additional note about related work. The sampling distribution is an important problem in RL and is not well/rigorously studied. I really think it warrants a more complete discussion of related work. 
For example, the author might also discuss the Langevin dynamics Monte Carlo sampling method in RL (Frequency-based search control in Dyna by Pan et al.), as their sampling distribution is supported by intuition and suggestive theoretical evidence and they show their method is better than prioritized ER and ER.**\\n\\n- We appreciate the insightful comment. We also agree that the sampling distribution is critical for model-based RL. We have stated this in line 17 of the Related Work section on page 10 (How to sample is also a\\ncrucial issue to model-based RL algorithms ...) in our revised manuscript.\\n\\n---\\n\\nWe sincerely thank you for your insightful comments since they helped us to further improve the discussions in the paper. Please let us know if there is anything else we should address or have misunderstood. \\n\\n---\\n\\n[1] Schaul, Tom, et al. \\\"Prioritized experience replay.\\\" arXiv preprint arXiv:1511.05952 (2015).\\n\\n[2] Zha, Daochen, et al. \\\"Experience replay optimization.\\\" In Proceedings of the 28th International Joint Conference on Artificial Intelligence. AAAI Press, 2019. p. 4243-4249.\\n\\n[3] van Hasselt, Hado P., Matteo Hessel, and John Aslanides. \\\"When to use parametric models in reinforcement learning?\\\" Advances in Neural Information Processing Systems. 2019.\"}"
"{\"title\": \"Response to Reviewer #1 (2/2)\", \"comment\": \"---\\n**Q4. The off-policy RL related works section is a bit over-long. Having skimmed the ERO paper it is definitely the most-closely related, and as such deserves a bit more time spent on discussing the differences.**\\n\\n- We thank the reviewer for the helpful suggestions. We have reduced the length of statements for off-policy RL in the Related Work section. We have also provided a detailed discussion of the difference between NERS and ERO in the paragraph below Equation (7). We also summarize the differences between ERO and NERS below:\\n\\n- First of all, ERO learns the sampling rate for each individual transition with a simple MLP, without consideration of its relative importance over other samples. However, this approach will score two near-redundant transitions as having similar importance. On the other hand, NERS learns the **relative importance** among all transitions in the given batch with a **permutation-equivariant set function** (please see Figure 2). \\n\\n- Secondly, ERO performs two-stage sampling (Bernoulli sampling followed by random sampling), which is both inefficient, as it requires $O(N)$ operations, and ineffective, since it results in sampling an **excessive number of redundant transitions**. Thus they use random sampling to further reduce the samples, but this makes ERO behave similarly to random sampling (please see the paragraph above Eq.(5) on page 4). Contrarily, NERS performs prioritized sampling (from a Dirichlet distribution from the softmax output), which can be efficiently done in $O(\\\\log N)$ operations (using a sum-tree). Further, NERS performs importance sampling and places weights on the samples even after sampling (Eq.5). \\n\\n---\\n**Q5. How are these expectations evaluated in practice? 
I'd assume it'd just be the difference of value functions before and after the update, but the appendix suggests a more involved computation that doesn't appear to have been made explicit anywhere.**\\n\\n- We calculated the replay reward (Eq. (6)) as the difference in the **cumulative rewards** between the current and previous evaluations. In lines 4-8 on page 18 of the supplementary material, we mentioned that each off-policy algorithm conducts evaluations at a fixed frequency, and we compute the replay reward at each evaluation step.\\n\\n---\\n**Q6. Final small point, towards the end a bi-GRU is mentioned as being used and I can't see where that'd come into play. Perhaps just a typo?**\\n\\n- This is indeed a typo and we have revised it in the updated manuscript. We apologize for the mistake, and thank you for the correction.\\n\\n\\n---\\n[1] van Hasselt, Hado P., Matteo Hessel, and John Aslanides. \\\"When to use parametric models in reinforcement learning?\\\" Advances in Neural Information Processing Systems. 2019.\"}"
"{\"title\": \"A sound idea and evaluation\", \"review\": \"EDIT: The statements about ERO clarify the contribution considerably. 6-->7\\n\\nThe authors propose an adaptive sampling mechanism optimized for policy improvement (NERS). By incorporating minibatch-wide information into the sampling score (while maintaining permutation invariance), they are able to out-perform reasonable baselines on a wide range of tasks.\\n\\nWhile NERS rarely beats other methods decisively, it has a strong showing across continuous and discrete action tasks and with a variety of off-policy learners. However, a few things would strengthen the empirical results. Some measure of spread should be reported on all of the Tables (e.g. standard error). The number of random seeds should also be mentioned. The reasoning behind the task selection should also be made explicit. The Atari subset used here is a bit unusual, particularly the choice to not use frame-skip. Investigating the sampling decisions of NERS is attempted in Figure 4, but further work should be done to provide evidence to the 'diversity of samples' claim. Ideally, NERS wouldn't just trade off TD error and Q-value over time, but also within each batch. Reporting something like the average minibatch Q-value/ TD-error standard deviation on NERS vs other methods would be nice. A qualitative evaluation akin to Figure 1 would also help guide intuition.\\n\\nThe off-policy RL related works section is a bit over-long, discussing things like the dueling architecture which don't seem to be overtly related apart from coming from the same sub-field. On the other side of things, having skimmed the ERO paper it is definitely the most-closely related, and as such deserves a bit more time spent on discussing the differences. For example, it is a bit unclear if the two sampling reward functions are different. 
An uncharitable reading of this paper would be that it is just an architecture tweak on top of ERO, and while the empirical results help dispel this idea, I think a more explicit comparison would still be useful.\\n\\nA related point is that the reward function for the sampler is quite unclear (Equation 6). How are these expectations evaluated in practice? I'd assume it'd just be the difference of value functions before and after the update, but the appendix suggests a more involved computation that doesn't appear to have been made explicit anywhere.\\n\\nFinal small point, towards the end a bi-GRU is mentioned as being used and I can't see where that'd come into play. Perhaps just a typo?\\n\\nOverall, I like this paper. Evaluating an idea across a variety of learning algorithms, observation and action spaces is no small feat, and the results are very solid. With a few tweaks and explanations this would be a very strong paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"The paper proposes an interesting idea to design sampling distribution to improve sample efficiency of deep RL algorithms. But there are several nontrivial issues to be resolved.\", \"review\": \"Observing that the existing ER-based sampling methods may introduce bias or redundancy in sampled transitions, the paper proposes a new sampling method in the ER learning setting. The idea is to take into consideration the context, i.e. many visited transitions, rather than a single one, based on which one can measure the relative importance of each transition. Specifically, the weights of transitions are also learned through a Reinforce agent and hence the sampling distribution is learned to directly improve sample efficiency.\\n\\nClarity. The presentation is clear in most places. But I do feel the core part of updating the sampling policy needs clarification. \\n\\nQuality. Please see the questions below.\\n\\nOriginality/Significance. The method is novel and is potentially interesting to the RL research community. \\n\\nIn the original prioritized ER paper, they use a ratio to mitigate the biased sampling issue, did the authors ever visualize what the result would be (say in Figure 1) if you use that ratio? \\n\\nEq (2) the input feature can be highly non-stationary/unstable. For example, some of the variables may decrease all the time, and some others may increase all the time. Intuitively, training with such data should be very challenging. It looks like, in order to resolve the issue of biased sampling, the authors introduce an even more difficult task. Do the authors have some comments about this? \\n\\nI am confused about how the sampling network is updated. In Algorithm 1, my understanding is that if the current time step is the end of an episode, then update the sampling network. Is it correct? Note that Algorithm 1 indicates that the actor, critic are updated at each time step. 
But the paper also says that the NERS is updated at each evaluation step and this means that throughout the evaluation episode the policy should be fixed to estimate (6). Can the author further explain how the network is updated? \\n\\nUsing evaluation to learn parameters seems unrealistic; the evaluation may happen only one time in practice. By evaluation, my understanding is that it measures how much we can gain if we deploy such a policy. If updating NERS requires the use of evaluation data, this largely limits usability. \\n\\nI am also confused by the statement \\u201cThe replay reward is interpreted as measuring how much actions of the sampling policy help the learning of the agent for each episode.\\u201d Eq (6) says that the reward is actually how much it improved from the current evaluation to the previous one. Whether you use the special sampling method or not, it should make an improvement. So this difference does not indicate \\u201chow much the sampling distribution can help.\\u201d\\n\\nIn the empirical study, NERS does not show a clear benefit from the learning curves. I believe it is better to average over a smoothing window before averaging over random seeds. Or do more runs. Doing more runs should not be that computationally expensive at least on Pendulum and LunarLander. And one question, how do you implement the Prioritized ER? Do you also use the importance ratio to anneal the bias as described in section 3.4 in that paper (https://arxiv.org/pdf/1511.05952.pdf)? Do you ever tune a bit the parameter beta in that formula? \\n\\nAn additional note about related work. \\nThe sampling distribution is an important problem in RL and is not well/rigorously studied. I really think it warrants a more complete discussion of related work. 
For example, the author might also discuss the Langevin dynamics Monte Carlo sampling method in RL (Frequency-based search control in Dyna by Pan et al.), as their sampling distribution is supported by intuition and suggestive theoretical evidence and they show their method is better than prioritized ER and ER.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea after clarifying the misunderstanding.\", \"review\": \"##########################################################################\", \"summary\": \"This paper proposes to improve sampling from the experience replay buffer by weighting samples by their \\\"relative usefulness\\\". \\n\\nThis paper proposes to use two encoders - one global that encodes across the current batch of experience replay samples, and one local that encodes each selected timestep. Using the encodings, a scorer scores the experiences and weights the actor and critic losses in proportion to that score. The sampler is trained to maximize the probability that it chooses high reward timesteps on average. \\n\\n\\n \\n##########################################################################\", \"pros\": \"Experience Replay is a widely used technique and improving on the naive random sampling method makes sense. The analysis on other sampling methods is insightful. \\nThe idea to have a global feature pooled from the sampled transitions is unique. \\nReweighting each sample's loss at training time is simple and effective.\\n\\n##########################################################################\", \"cons\": \"The paper's idea could be flawed. The sampler is trained so that it prioritizes high reward timesteps as in eq 6.\\nBut this is dubious. What if we need to sample the N timesteps right before a high-reward timestep, even though those preceding timesteps do not have high return themselves? And how do you explain why \\\"NERS focuses on sampling transitions with high TD-errors in the beginning, ... as the timestep progresses, it samples transitions with both high TD-errors and Q-values (diverse)\\\", given that it's trained with a single objective to maximize sampled reward? \\\"using various features in an advanced manner.\\\" is not a satisfactory explanation. \\n\\n\\nThe experiment section is not convincing. 
\\nThe model has not been trained till convergence. e.g. SAC and ERO has all been trained with 1e6+ steps at least. In addition, what is the reason to do Hopper from Mujoco instead of Hopper-V2 from the OpenAI gym? With the latter, you can compare with the numbers published in ERO.\\n\\n##########################################################################\", \"questions_during_rebuttal_period\": \"Please address and clarify the cons above\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
0NQdxInFWT_ | Active Deep Probabilistic Subsampling | [
"Hans van Gorp",
"Iris A.M. Huijben",
"Bastiaan S. Veeling",
"Nicola Pezzotti",
"Ruud Van Sloun"
] | Subsampling a signal of interest can reduce costly data transfer, battery drain, radiation exposure and acquisition time in a wide range of problems. The recently proposed Deep Probabilistic Subsampling (DPS) method effectively integrates subsampling in an end-to-end deep learning model, but learns a static pattern for all datapoints. We generalize DPS to a sequential method that actively picks the next sample based on the information acquired so far; dubbed Active-DPS (A-DPS). We validate that A-DPS improves over DPS for MNIST classification at high subsampling rates. We observe that A-DPS learns to actively adapt based on the previously sampled elements, yielding different sampling sequences across the dataset. Moreover, we demonstrate strong performance in active acquisition Magnetic Resonance Image (MRI) reconstruction, outperforming DPS and other deep learning methods. | [
"Compressed Sensing",
"subsampling",
"active acquisition",
"accelerated MRI"
] | Reject | https://openreview.net/pdf?id=0NQdxInFWT_ | https://openreview.net/forum?id=0NQdxInFWT_ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"lElCevcLrF",
"IeYgK74BQ87",
"ZiStn5QgWsR",
"lFo0kbISg_e",
"XEoejfOTsWD",
"mKlj7AvGiYV",
"9X5Rx_U6rJa",
"AKFqqdsHpeD",
"AXUszoKRCFL",
"KI_Z-Ca98_b",
"Im23lluzCeN",
"gYuamGOTJMa"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040356145,
1606303203997,
1606303177526,
1606303135514,
1606303051630,
1606302802671,
1605350265507,
1605350147822,
1605350072748,
1603919433178,
1603894885014,
1603697403137
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3453/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3453/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3453/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3453/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3453/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3453/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3453/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3453/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3453/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3453/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3453/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The review phase was very constructive, where reviewers raised several opportunities for improvements. The authors did a very good job in their rebuttal, which led some reviewers to change their opinion in a positive direction. Overall, reviewers agree that this is the borderline paper with remaining concerns about the weak experimentation. The paper was again discussed by the Area Chair and Program chairs. Due to the competitive nature of the conference and the high bar of experimental evaluations expected by empirical papers, the paper was finally rejected. We hope authors will use the feedback from the reviews and make a stronger submission in near future.\"}",
"{\"title\": \"Second reply to review #3\", \"comment\": \"We again would like to thank the reviewer for the constructive feedback and suggestions that they have given us. As a result, we have made the following changes to the manuscript:\\n\\n[Main review]:\\nWe have added additional experiments to our manuscript, at a 1\\\\% sampling ratio for the MNIST classification task, and at a 16 times acceleration factor for the MRI reconstruction task. Moreover, we have performed an insightful t-SNE analysis that (in addition to the performed quantitative assessment) enables qualitative interpretation of our method. We did this for both the MNIST classification and MRI reconstruction tasks. This analysis and associated discussion is included in the revised paper.\\n\\n[Comment 1]:\\nWe have revised the introduction section. Reducing the amount of text devoted to DPS in favor of more substantiation for A-DPS.\\n\\n[Comment 2]:\\nIn section 3.3 we have added an additional paragraph clarifying how the LSTM cell fits into our method.\\n\\n[Comment 3]:\\nWe have performed a statistical analysis on the performance gains made by A-DPS over DPS in the MRI reconstruction task, concluding that they are indeed statistically significant.\\n\\nWe hope that our changes will be well received and look forward to your final decision on our manuscript.\"}",
"{\"title\": \"Second reply to review #2\", \"comment\": \"We again would like to thank the reviewer for the constructive feedback and suggestions that helped to improve our manuscript. As a result, we have made the following changes to the manuscript:\\n\\n[Novelty]: We have updated the introduction section to better reflect the added value of active acquisition over learned, but ultimately, static acquisition. Moreover, we have highlighted how this is an active field of research that is receiving a lot of attention recently by citing two newly published papers in the related work section. We emphasize the novelty and non-triviality of the proposed approach, while highlighting its simple and elegant methodological implementation. \\n\\n[Validation]:\\nWe would like to refer the reviewer to the above general response about comparison to other active acquisition baselines. We agree that now that several code releases (concurrent to this submission) have become available, future work should include such a comparison. At the time of submission this was not yet possible however, highlighting the timely nature of this work. \\n\\n[Clarity]: \\nFollowing the advice of the reviewer we have removed the toy example from the manuscript as it was deemed confusing and distracting from the main message of the paper.\\n\\n[Impact]:\\nWe have performed a statistical analysis on the performance gains made by A-DPS over DPS in the MRI reconstruction task, concluding that they are indeed statistically significant.\\n\\n[Comment 1]:\\nWe again thank the reviewer for raising this paper to our awareness, it has since been included in both the introduction as well as the related work section.\\n\\nWe hope that our changes will be well received and look forward to your final decision on our manuscript.\"}",
"{\"title\": \"Second reply to review #1\", \"comment\": \"We again would like to thank the reviewer for the constructive feedback and suggestions that helped to improve our paper. As a result, we have made the following changes to the manuscript:\\n\\n[Weakness 1]: We would like to refer the reviewer to the above general response about comparison to other active acquisition baselines. We fully agree that now that several code releases (concurrent to this submission) have become available, future work should include such a comparison. \\n\\n[Weakness 2]: Following the advice of the reviewer we have removed the toy example from the manuscript as it was deemed confusing and distracting from the main message of the paper.\\n\\n[Question 1]: We have updated Figure 3a (Now Figure 2a, as the toy example was removed) with the additional experiment that we performed at a 1% sampling ratio.\"}",
"{\"title\": \"General comment to all reviewers\", \"comment\": [\"We again would like to thank all of the reviewers for their time and the constructive feedback and suggestions that they have given us. We would like to use this official comment to list all of the changes that were made to incorporate the given feedback.\", \"We have revised the introduction section. Reducing the amount of text devoted to DPS in favor of more substantiation for A-DPS.\", \"We have updated the related work section to include three extra sources. One of which was raised to our attention by reviewer 2 (Ji et al., 2008), and the other two that have been published in this field since the initial deadline of the ICLR 2021 conference (Pineda et al., 2020 and Bakker et al., 2020).\", \"In section 3.3 we have further clarified how the LSTM cell fits into our method.\", \"Following the advice of the reviewers we have removed the toy example from the manuscript as it was deemed confusing and distracting from the main message of the paper.\", \"At the request of Reviewer 1, we have expanded the sampling ratios explored in the MNIST classification task by adding the results at a 1\\\\% sampling ratio.\", \"We have performed an insightful t-SNE analysis that (in addition to the performed quantitative assessment) enables qualitative interpretation of our method. We did this for both the MNIST classification and MRI reconstruction tasks. This analysis and associated discussion is included in the revised paper.\", \"We enriched our experimental section by performing an additional MRI experiment at a different sampling rate, where A-DPS also outperforms all other baselines.\", \"Following the advice of reviewer 3 we have performed statistical tests (i.e. the Student's t-test) to show that the performance gain of A-DPS over DPS in the MRI reconstruction task is statistically significant.\", \"We hope that our changes will be well received and look forward to your final decision on our manuscript.\"]}",
"{\"title\": \"Comment regarding comparisons with other active sampling strategies\", \"comment\": \"We would like to address the reviewers' comment regarding comparisons with other active sampling strategies. In the past two weeks we have done our utmost to also implement such an active sampling baseline.\\n\\nAt the time of submission and the ICLR deadline, unfortunately no public code of other active sampling methods was available. By now, besides ours, 2 other concurrent code repo's have been released: Zhang et al., 2019 together with Pineda et al., 2020 (see https://github.com/facebookresearch/active-mri-acquisition) and Bakker et al., 2020 (see https://github.com/Timsey/pg_mri). The corresponding paper of the latter was even only published on the 30th of October. Despite our efforts across the past 2 weeks, we did not manage to achieve a functional reproduction and comparison with these concurrent releases. We really hope that the reviewer understands - we truly did our best in the given time. Future work should include such baselines, and the fact that these repositories have only become available very recently actually highlights the timeliness of the current paper. \\n\\nWe have however compared A-DPS to a plethora of non-active baselines, all of which are included in our code, which is available from here: https://drive.google.com/file/d/1HF6OtEpzcIPB4UOrS4kR3BQdv1pOsuEh/view?usp=sharing. We hope that such open code sharing facilitates reproducibility and ease of comparison between the implemented non-active baselines and our active acquisition frameworks in the future.\"}",
"{\"title\": \"Reply to review #3\", \"comment\": \"We would first like to thank the reviewer for the constructive feedback and suggestions. We will use this to strengthen our paper across this discussion period. In the following we already provide an initial reply to the questions and raised concerns:\\n\\n[Main review]:\\nWe agree with the reviewer that the paper would improve by providing more substantiation for our method. We follow the reviewers advice and will do our utmost to further enhance the results section through comparison and in-depth model insight.\\n\\n[Comment 1]:\\nWe agree with the reviewer that the introduction section focuses a lot on the context and motivation of non-adaptive sampling. We therefore will update the original introduction and related work sections in order to discuss the need for, and advantages of adaptive acquisition methods in more detail.\\n\\n[Comment 2]:\\nTo properly answer this question, we must clarify that $f_\\\\theta(.)$ is a deep neural network which consists (among others) of an LSTM cell. This is opposed to the understanding of the referee that our entire model behaves as an LSTM cell. Our framework indeed encapsulates recurrency, but it's not an LSTM cell in itself. We hope this clarifies the raised question. To clarify this concept in the revised manuscript, we will change the text surrounding equations (6) and (7). Explaining in more detail the nature of $f_\\\\theta(.)$ and $g_\\\\kappa(.)$ and how the LSTM fits into them.\\n\\n[Comment 3]:\\nWe agree with the reviewer that it is indeed prudent to test whether the improvements gained by A-DPS over DPS on MRI reconstruction are statistically significant, and we will update the revised manuscript with such a statistical comparison.\"}",
"{\"title\": \"Reply to review #2\", \"comment\": \"We would first like to thank the reviewer for the constructive feedback and suggestions. We will use this to strengthen our paper across this discussion period. In the following we already provide an initial reply to the questions and raised concerns:\\n\\n[Novelty]:\\nActive sampling is receiving increasing attention in the research community [1-3], illustrating the non-trivial nature of the problem. The extension from DPS to A-DPS is indeed not a large methodological leap, but provides an effective active acquisition framework nonetheless, and on established ground. Adaptivity brings us one step closer to the theoretical optimum of sub-sampling, and it is certainly worth studying the effect of this improvement in isolation. And although there is value in completely novel frameworks too, an isolated adjustment provides greater clarity in the relative impact of the improvement. Moreover, this leads to a simple method that could facilitate straightforward adoption.\\n\\nTo better emphasize this, we will update the manuscript by adding a more extensive discussion in the introduction explaining the use cases for which such an active acquisition framework is desirable over static acquisition as learned by DPS.\\n\\n[1] Zizhao Zhang, Adriana Romero, Matthew J. Muckley, Pascal Vincent, Lin Yang, and Michal\\nDrozdzal. Reducing Uncertainty in Undersampled MRI Reconstruction with Active Acquisition. 2019.\\n\\n[2] Kyong Hwan Jin, Michael Unser, and Kwang Moo Yi. Self-Supervised Deep Active Accelerated\\nMRI. 2019.\\n\\n[3] Tim Bakker, Herke van Hoof, Max Welling. Experimental design for MRI by greedy policy search. 2020.\\n\\n[Validation]:\\nWe follow the reviewers advice and will do our utmost to further enhance the results section through comparison and in-depth model insight.\\n\\n[Clarity]: \\nWe thank the reviewer for this careful assessment and recommendation - we agree that the toy example might distract from the main focus of the paper and will therefore remove it in the revised version.\\n\\n[Impact]:\\nTo enable better assessment of the improvement with active sampling we will test the statistical significance of the performance gains in our revised manuscript.\\n\\n[Limitations]:\", \"the_reviewer_is_correct\": \"indeed A-DPS selects rows from an existing matrix. This is per design and actually a key strength of A-DPS, allowing for immediate hardware implementations that directly reduce the number of samples taken at the sensing side.\\n\\n[Comment 1]:\\nWe thank the reviewer for raising this paper to our awareness, which we will include in our revision. \\n\\n[Comment 2]:\\nWe are unsure which nonlinearity the reviewer aims at. The only non-linearity in the forward model is the magnitude operator, which is repeated in equation (9). We would of course be happy to further discuss this across the next days. Thanks!\"}",
"{\"title\": \"Reply to review #1\", \"comment\": \"We would first like to thank the reviewer for the constructive feedback and suggestions. We will use this to strengthen our paper across this discussion period. In the following we already provide an initial reply to the questions and concerns:\\n\\n[Weakness 1]:\\nWe follow the reviewers advice and across the next days we will do our utmost to further enhance the results section through comparison and in-depth model insight.\\n\\n[Weakness 2]:\\nWe follow the reviewers advice and will remove the toy example in the revised paper.\\n\\n[Question 1]:\\nWe agree with the reviewer that this would be an interesting experiment. We thus evaluated the impact of further increasing the subsampling rate in the MNIST example. Reducing the sampling rate to 1\\\\% leads to average accuracies of 63\\\\% and 57\\\\% for A-DPS and DPS, respectively, confirming the trend that A-DPS consistently outperforms DPS in the low sampling rate regime. Note that this sampling is very sparse (only 7 samples in total) explaining the reduced accuracies. We will update Fig. 3a in the revised manuscript by including these new results.\"}",
"{\"title\": \"Limited novelty and validation\", \"review\": \"### Summary\\nThis paper develops methods to perform active subsampling. That is, given some downstream task like classification or image reconstruction, it sequentially selects which elements of an image or signal to sample so as to perform said task. It does so by extending the Deep Probabilistic Subsampling (DPS) method developed by Huijben et al. The proposed method is applied to two problems as well as a simplified, low-resolution MRI reconstruction problem.\\n\\n### Strengths\", \"motivation\": \"Active sampling is an interesting idea that has been around for some time, but was often computationally impractical. Thanks to GPUs and deep learning, active sampling is becoming more practical and it's interesting to see new work in this direction.\\n\\n### Weaknesses\", \"novelty\": \"The method is a small extension to the DPS method where the network that selects which rows to sample is conditioned on the existing measurements.\", \"validation\": \"The paper did not compare to any other active sampling strategies. The authors made no effort to replicate existing methods.\", \"clarity\": \"The Markov chain example in section 4.1 was hard to follow and more distracting than informative. The phrase \\\"the task model gets to sample only one position out of every three\\\" reads as if the model is sampling one position out of every three in the sequence. It took some time before I realized this meant that at every position in the sequence it was probing one of the three states.\", \"impact\": \"The results with active sampling were only marginally better than results with a fixed (learned) sampling strategy.\", \"limitations\": \"The method is applicable only to true subsampling problems, not general sensing. That is, one isn't designing the rows of a measurement matrix on the fly but rather selecting which row from an existing matrix (identity in most of the examples) that one would like to sample from.\\n\\n\\n### Recommendation\\nThe paper's presentation could be improved and it is sorely missing comparisons to other active sampling methods. I don't think the paper's novelty is enough to overcome these issues and so I do not believe it is ready for publication.\\n\\n### Comments\\nWhile the proposed method was computationally impractical, active sampling was discussed extensively in [A] from an information theoretical perspective.\\n[A] Ji, S., Xue, Y., & Carin, L. (2008). Bayesian compressive sensing. IEEE Transactions on signal processing, 56(6), 2346-2356.\\n\\nBecause of the nonlinearity in the forward model, equation (9) is not actually proximal gradient descent. I believe there's a sign(F^HD\\\\circFX) term missing from the (sub) gradient.\\n\\n### Update\\n\\nI thank the authors for their comprehensive response. While it's unfortunate they couldn't compare to any other active methods, the related work and overall clarity of the paper is significantly improved. The t-SNE plots were informative and interesting. While I have reservations about the paper's lack of comparisons, I think its publication still might be a net positive for the research community.\\n\\nI have updated my score.\\n\\n##### Other comments\\nLet A(X)=F^H D\\\\circ F X. The expression A^H(Ax-Y\\\\circ sign(A(x))) is a subgradient of 1/2|| Y - |A(X)|||^2 but A^H(|Ax|-Y) is not. I would avoid calling (9) projected gradient descent as the \\\"gradient\\\" isn't really a gradient.\\n\\n\\\"We have performed a statistical analysis on the performance gains made by A-DPS over DPS in the MRI reconstruction task, concluding that they are indeed statistically significant.\\\" It would be nice to see confidence intervals in Tables 1 and 2.\\n\\n#### Questions/comments that do not affect the review:\\nWhy use an LSTM/any network with memory? It seems the next sample depends on the previous samples, but not their order. The ablation study on pg 6 shows that memory helps (at low sampling rates), but I don't understand the intuition why. Could the LSTM just have more capacity?\", \"typos\": \"\", \"pg_2\": \"\\\"cells.During\\\" space\", \"pg_3\": \"\\\"However, This\\\" capitalization\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A borderline case?\", \"review\": \"In this paper, the authors consider the problem of compressed sensing where the underlying signal of interest is captured and restored based only on sparse measurements: Specifically, this paper focuses on the scenario of Deep Probabilistic Subsampling (DPS) which finds sparse measurements in the way that the models designed to solve specific learning problems based on these measurements are jointly optimized. The authors extend DPS to a sequential framework that iteratively and actively selects the next measurement points: The proposed approach encodes the information accumulated until a time step into a context vector which is updated, and used in selecting the next point, in an LSTM-like framework (see minor comments below). In the experiments with two toy problems (including MNIST) and an MRI reconstruction problem, the authors demonstrated that the proposed Active DPS (ADPS) outperforms DPS (in toy problems) and three other compressed sensing algorithms (for MRI reconstruction).\", \"i_think_this_paper_makes_a_borderline_case\": [\"DPS provides a framework that combines the compressed sensing part (sparse data acquisition) and the subsequent learning part in an end-to-end manner. This paper contributes by extending DPS into an active/sequential learning framework achieving significant performance gains over DPS (mainly on toy problems. see minor comments below). On the other hand, the proposed approach appears to be incremental: ADPS adds a simple sequential update structure (of a context vector) to DPS, which can be described by only two equations (6 and 7). The simplicity of the changes proposed (over DPS) is not a limitation, but it could be accompanied by an in-depth theoretical analysis, a convincing qualitative discussion or _extensive_ experiments demonstrating the practical relevance of the proposed approach.\", \"Minor comments\", \"Apart from the last one paragraph, the Introduction Section focuses on discussing the context and motivation of Deep Probabilistic Subsampling (DPS). Instead, the authors could use this space to describe and characterize the proposed Active DPS in detail.\", \"I was not sure why the proposed architecture (Figure 1 and equations 6 and 7) is called LSTM, it has a recurrent network structure but I was not able to find any attention (gating) mechanism that characterizes LSTM. Please advise me if I missed anything.\", \"Please test if the improvements gained by ADPS over DPS on MRI reconstruction are statistically significant.\"], \"update\": \"Thank the authors for their responses, clarification, and additional experiments. I read through authors\\u2019 responses and the comments from the other reviewers. I still think this paper makes a borderline case for 1) its technical contribution on extending DPS and thereby achieving significant performance gain on a toy problem and MRI reconstruction tasks, still 2) with limited novelty and room for a more extensive experimental validation (perhaps, beyond MRI). My other concerns on clarity and significance of experiments have been addressed. I would raise my rating to marginally above acceptance threshold (borderline).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting extension of an existing method with somewhat limited experiments\", \"review\": \"SUMMARY:\\nThe paper at hand deals with compressed sensing (CS) and introduces an extension to deep probabilistic subsampling (DPS) called active deep probabilistic subsampling (A-DPS): instead of learning a sampling pattern that is equal for each element of the dataset, A-DPS adaptively selects entries (of each element) based on the information acquired so far. It is shown that this active sampling increases performance for different tasks: a toy example that aims to demonstrate the benefits of active sampling, a classification task (from subsampled inputs) on the MNIST dataset and a reconstruction task on the NYU fastMRI database of knee MRI volumes.\", \"strengths\": \"1. The paper is very clearly written and very comprehensible. Furthermore, it is very detailed about the experimental setup. I also liked the description of the general framework which thoroughly defines the used notation.\\n2. The idea is well motivated and the approach of selecting samples depending on the previously selected ones makes intuitive sense.\\n3. The results of the experiments on MNIST and the NYU fastMRI data are promising. A plethora of (non-active) subsampling schemes are benchmarked as well.\", \"weaknesses\": \"1. The greatest weakness of this paper is the missing comparison to other active sub-sampling schemes (Zhang et al., 2019; Jin et al., 2019). It would be nice to see whether the proposed method produces better results than the existing methods. \\n2. I found the toy example very constructed. It is not really easy to understand and does in my opinion not improve the quality of the paper.\", \"questions\": [\"What happens when the MNIST sampling ratio in Figure 3a is further increased? Does A-DPS consistently outperform DPS in low sampling ratio regimes?\"], \"decision\": \"Overall, the paper presents an interesting and novel approach. However, it remains an open question whether the proposed A-DPS scheme performs better than already existing active subsampling schemes. Besides this, the experimental evaluation is solid. I lean towards acceptance.\", \"update_after_rebuttal\": \"I thank the authors for their responses and appreciate the inclusion of some of the requested changes in the paper. However, the paper still misses the comparison to other adaptive methods which is the paper's greatest weakness. Therefore, I decided to keep my score at 6.\", \"minor_remarks\": [\"Caption of Table 1 could use some more spacing\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
b7ZRqEFXdQ | Improving Sequence Generative Adversarial Networks with Feature Statistics Alignment | [
"Yekun Chai",
"Qiyue Yin",
"Junge Zhang"
] | Generative Adversarial Networks (GAN) are facing great challenges in synthesizing sequences of discrete elements, such as mode dropping and unstable training. The binary classifier in the discriminator may limit the capacity of learning signals and thus hinder the advance of adversarial training. To address such issues, apart from the binary classification feedback, we harness a Feature Statistics Alignment (FSA) paradigm to deliver fine-grained signals in the latent high-dimensional representation space. Specifically, FSA forces the mean statistics of the fake data distribution to approach that of real data as close as possible in a finite-dimensional feature space. Experiments on synthetic and real benchmark datasets show the superior performance in quantitative evaluation and demonstrate the effectiveness of our approach to discrete sequence generation. To the best of our knowledge, the proposed architecture is the first that employs feature alignment regularization in the Gumbel-Softmax based GAN framework for sequence generation. | [
"feature statistics alignment",
"signals",
"sequence generation",
"gan",
"great challenges",
"sequences",
"discrete elements",
"mode dropping"
] | Reject | https://openreview.net/pdf?id=b7ZRqEFXdQ | https://openreview.net/forum?id=b7ZRqEFXdQ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"2OgxtsbfcvU",
"Pqu7TPSxfFN",
"4kmtzJfYcW",
"YZ_lwfy5JwS",
"Olp2qNFDDCq",
"tXxPiFfsuB_",
"GmJZxT7V0pD",
"dF_eK3Sowmp",
"VHkLDQqF-h2",
"2E-Y6UFOVI",
"CnsHXulT1e"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040458382,
1606230544705,
1605935142450,
1605933872928,
1605933284826,
1605932422737,
1605932384246,
1604006125113,
1603901316206,
1603891305586,
1603846885001
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3452/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3452/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3452/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3452/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3452/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3452/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3452/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3452/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3452/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3452/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The work introduces a method that uses the Feature Statistics Alignment paradigm to improve sequence generation with GANs. The contribution is interesting and novel (although marginally), clarity is also good.\\nHowever the reviewers raised several concerns calling for more comprehensive and thorough evaluation. Experiments show an improvement comparing to selected baselines and the revised paper addressed, at least partially, a serious evaluation concern of one reviewer.\\nAlthough the excellent revision work some important open questions still seem to remain, in particular the choose of alignment metrics and a thorough evaluation.\"}",
"{\"title\": \"Hi reviewers, we are looking forward to your reply!\", \"comment\": \"Hi reviewers, thank you for your comments and helpful advice! Please note that the discussion period would be due on 24 Nov and we would not respond afterward. We would like to discuss if you still have concerns or comments.\"}",
"{\"title\": \"Response to Reviewer4\", \"comment\": \"Thank you very much for your comments. We address your specific questions and comments below:\\n\\n**Questions**\\n\\n1.\\\"The LSTM model gets NLL lower than the real data. This is a clear evidence of overfitting.\\\"\\n\\nIt is worth mentioning that the \\\"real data\\\" in Table 2 are also generated by an LSTM model as the oracle. We were also surprised that the vanilla LSTM could achieve the lowest NLL score when the generated sentence length is 20. However, it performed not as well as the RMC when generating long sentences (with a length of 40). We guess this is due to the fact that the oracle model of synthetic data is a single layer LSTM, which could result in the **LSTM-biased** phenomenon for short text generation (length 20). As for long sentence generation, since the vanilla LSTM could forget some information with a long-term span, which could lead to the performance drop as reported in Table 2. We show that the RMC performs better than LSTM for generating long sequences.\\n\\n2.\\\"NLL_{gen} is missing.\\\"\\n\\nYes, NLL_{gen} could be used to measure the diversity of generated sequences. In our experiments, we just use the synthetic data to validate the effectiveness of our method and do not report the NLL_{gen} metric in the submitted version (following SeqGAN, RankGAN, LeakGAN). Instead, we use NLL_{gen} to measure the generation diversity on real datasets, as shown in Table 3. \\n For clarity, we have reported the NLL_{gen} scores in Appendix C.1 of the revised version. The NLL_{gen} score of our model is similar to that of baseline models.\\n\\n3.\\\"BLEU(F) metric cannot show the diversity of examples\\\" and \\\"BLEU(B) (Zhou et. al, 2020) metric is missing\\\"\\n\\nYes, BLEU(F) is used to reflect the **quality** of samples, whereas NLL_{gen} is applied to indicate the **diversity** of generated sentences. 
The adoption of BLEU(F) and NLL_{gen} follows the previous work, such as Texygen (Zhu et al., 2018) and RelGAN (Nie et al., 2019). To answer Reviewer4's question, we report the BLEU(B) scores in the following tables. We found that the proposed method achieves similar results in terms of diversity compared with SAL (Zhou et al., 2020): on the MS COCO dataset, the BLEU(B) score of our method is a bit lower but the NLL_{gen} is better; on the EMNLP2017 WMT News dataset, our method achieves slightly better BLEU(B) than SAL, but a bit worse NLL_{gen}. Moreover, our method achieves a significant improvement on the BLEU(F) metrics.", \"on_the_ms_coco_dataset\": \"| | Bleu-2 (B) | Bleu-3 (B) | Bleu-4 (B) | Bleu-5 (B) | NLL_{gen} | Bleu-2 (F) | Bleu-3 (F) | Bleu-4 (F) | Bleu-5 (F) |\\n|-------------|--------------|--------------|--------------|--------------|------------|--------------|--------------|--------------|--------------|\\n| SAL | **0.724** | **0.503** | **0.313** | **0.198** | 0.873 | 0.785 | 0.581 | 0.362 | 0.227 |\\n| Ours (MSA) | 0.700 | 0.404 | 0.206 | 0.123 | 0.760 | **0.959** | **0.866** | **0.759** | **0.630** |\\n| Ours (MDA) | 0.712 | 0.413 | 0.209 | 0.121 | **0.717** | 0.938 | 0.863 | 0.731 | 0.582 |\", \"on_the_emnlp2017_wmt_news_dataset\": \"| | Bleu-2 (B) | Bleu-3 (B) | Bleu-4 (B) | Bleu-5 (B) | NLL_{gen} | Bleu-2 (F) | Bleu-3 (F) | Bleu-4 (F) | Bleu-5 (F) |\\n|-------------|--------------|--------------|--------------|--------------|------------|--------------|--------------|--------------|--------------|\\n| SAL | 0.726 | 0.431 | 0.232 | 0.123 | **2.578** | 0.788 | 0.523 | 0.281 | 0.149 |\\n| Ours (MSA) | 0.742 | **0.480** | **0.252** | **0.138** | 3.999 | **0.932** | **0.798** | 0.585 | **0.404** |\\n| Ours (MDA) | **0.744** | 0.474 | **0.252** | 0.137 | 2.732 | 0.916 | 0.784 | **0.592** | 0.386 |\\n\\n4.\\\"How is NLL_gen computed?\\\"\\n\\nNLL_{gen} computes the negative log-likelihood of reference samples in the test set by the generator, which is adopted by SAL (Zhou et. 
al. 2020) and RelGAN (Nie et. al. 2019) to measure the diversity of generated samples. We have clarified this in the paper.\\nTo sum up, we use NLL_{oracle} and BLEU(F) to automatically evaluate the sample quality and employ the NLL_{gen} to evaluate the diversity of generated samples. The proposed model achieves superior performance compared with previous work, while maintaining similar diversity of generated sentences. \\n\\nThank you for pointing out the issues of evaluation metrics, and **please let us know if we could address your concerns about evaluation.**\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for your detailed comments and reviews. We address your specific questions and comments below:\", \"questions\": \"1.About novelty:\\n\\n**A short answer**: Different motivation and usage. Our paper aims to mitigate the difficulty of the generator training by providing **additional signals** from the discriminator, whereas conventional feature matching serves as the training objective for both the generator and discriminator in language GANs. The FSA technique empirically achieves superior performance on Gumbel-Softmax GANs without introducing extra training parameters.\\n\\n**Detailed answer**:\\nThe paper proposes an FSA mechanism to improve the Gumbel-Softmax-based GAN's training, and achieves superior performance compared with previous works. It is true that FSA is a form of feature matching. However, previous language GANs using feature matching directly optimize the feature matching metrics, such as TextGANs, FM-GANs, etc. Instead of directly optimizing the feature matching loss for both the generator and discriminator, we use FSA to align the feature distribution and assist the training of only the generator. Besides, the proposed FSA is used to provide fine-grained learning signals for the generator besides the original (coarse) learning signals of language GANs. The idea of FSA could also be regarded as resembling the feature leakage in LeakGAN (Guo et. al. 2018), which leaks features from the discriminator to the generator. \\n\\n2.Thank you for pointing out the mistake, it should be \\\"the lower \\u03c4 could discourage exploration and tend to **exploit** during training\\\".\\n\\n3.Due to the page limit, we did not include a comparison of the generated samples from the baselines and the proposed method. 
We added it in Table 8 (Appendix E.1) of the revised version:\\n\\n| | Samples |\\n|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|\\n| Real |(1) a single kite flies high above a body of water as a person stands on the edge of the water . (2) a man wearing an apron in an industrial kitchen reaching for a pot . |\\n| MLE |(1) a man watches on his bike , in a lake on a field . (2) a women is standing behind an orange table in helmet on a child in the background . |\\n| SeqGAN |(1) some people sitting on top of luggage near a truck . (2) a man sitting in a bath tub on tops . |\\n| TextGAN |(1) a man riding a motorcycle . (2) a bathroom with a sink , and a table . |\\n| LeakGAN |(1) a man standing next to her cell phone on a street sign . (2) a woman is holding a child in the air . |\\n| MaliGAN |(1) a woman is standing and another oak cake on a drain . (2) a man standing in a kitchen with her laptop and two tables |\\n| RankGAN |(1) a colorful bike is is down next to a large mirror . (2) a man is riding a bike down a track . |\\n| RelGAN |(1) a woman walking with a dog in the city in front of a city bus . (2) a man sitting on a bed in a room with a chair on the couch . |\\n| Ours (MSA) |(1) a man is sitting on a motorcycle on a busy street , in a city . (2) a man sitting on a motorcycle on a crowded street near a building , with a bicycle in a parking lot . |\\n| Ours (MDA) |(1) a person is riding a motorcycle on a city street with a woman standing on the back of it . (2) a man with a woman standing next to a fire hydrant wearing a backpack . |\\n\\n\\n\\nWe hope we could address your concerns. **Please feel free to let us know if you still have questions.**\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"Thank you for your helpful reviews. We address your specific questions and comments below:\\n## Question #1\\nWe agree that there are several works in language GANs that employ temperature annealing in the generator. However, there are only a few works, such as RelGAN (Nie et. al. 2019), that leverage Gumbel-Softmax tricks in language GANs on real datasets. By saying \\\"it is under-explored\\\", we mean that its improvement remains an open problem. Thank you for pointing this out; we have clarified it in the revised paper.\\n## Question #2\\nThank you for giving the related work of language GANs on dialogue generation. We have added these works to the paper.\\n## Question #3\\nGood question! Indeed, the Texygen benchmark (Zhu et al., 2018) we followed used Self-Bleu to measure the diversity of generated real sentences. However, it is not feasible to use it as the evaluation metric.\\n\\n**A short answer**:\\n\\nThere is a problem when using the official implementation of Self-Bleu, which has been pointed out in previous work (Nie et. al. 2019; Zhou et. al. 2020).\\n\\n**Detailed answer**:\\n\\nPrevious work did not use it since there is an issue in the official implementation (https://github.com/geek-ai/Texygen/blob/master/utils/metrics/SelfBleu.py): the generated sentences changed but the reference remained the same during the evaluation process. It is discussed in the appendix of SAL (Zhou et. al. 2020) and the openreview of the RelGAN paper (Nie et. al. 2019). As reported in SAL and RelGAN, our self-BLEU (2-5) scores are also always 1. Thus we do not use the Self-BLEU metric.\\n\\nFor reference, we copy it here:\\n\\n'''Note that many previous works use self-BLEU Zhu et al. (2018) as a diversity metric. 
However, we find that there exists a problem in the official implementation of the self-BLEU metric: Only in the first time of evaluation that the reference and hypothesis come from the same \\u201ctest data\\u201d (i.e. the set of generated sentences). After that, the hypothesis keeps updated but the reference remains unchanged (due to \\u201cis-first=False\\u201d), which means hypothesis and reference are not from the same \\u201ctest data\\u201d anymore, and thus the scores obtained under this implementation are not self-BLEU scores. To this end, we modified the implementation to make sure that the hypothesis and reference are always from the same \\u201ctest data\\u201d (by simply removing the variables \\\"self.reference\\\" and \\\"self.is-first\\\") and found that the self-BLEU (2-5) scores are always 1 when evaluating all the models. This problem is also discussed in the openreview of the RelGAN paper.''' (Zhou et. al. 2020)\\n\\n## Question #4\\nGood point! Yes, it would be helpful to add a TSNE visualization for latent features with and without FSA. We appreciate your notification and would try to add it in the future version. \\n\\nThank you for your helpful suggestions and comments! **Please let us know if you still have some concerns or questions.**\"}",
"{\"title\": \"Response to Reviewer3 (Part 1)\", \"comment\": \"Thank you for your helpful reviews. We address your specific questions and comments below:\", \"questions\": \"1.\\\"It lacks novelty as features matching, relativistic discrimination or Gumbel-Softmax are not new ideas.\\\"\\n\\n**A short answer**: Different motivation and usage. Our paper aims to mitigate the difficulty of the generator training by providing **additional signals** from the discriminator, whereas conventional feature matching serves as the training objective for both the generator and discriminator in language GANs. The FSA technique empirically achieves superior performance on Gumbel-Softmax GANs without introducing extra training parameters.\\n\\n**Detailed answer**:\\nThe paper proposes the FSA mechanism to improve the Gumbel-Softmax-based GAN's training and achieves superior performance compared with previous works. It is true that FSA is a form of feature matching. However, previous language GANs using feature matching directly optimize the feature matching metrics, such as TextGANs, FM-GANs, etc. Instead of directly optimizing the feature matching loss for both the generator and discriminator, we use FSA to align the feature distribution and assist the training of only the generator. Besides, the proposed FSA is used to provide fine-grained learning signals for the generator besides the original (coarse) learning signals of language GANs. The idea of FSA could also be regarded as resembling the feature leakage in LeakGAN (Guo et. al. 2018), which leaks features from the discriminator to the generator. \\n\\n2.\\\" it appears that they lack semantic meaningfulness especially for long sentences (see for instance Table 8)\\\"\\n\\nYes, there may be some meaningless sentences among the randomly generated samples, which is common in most language GANs because it is difficult to generate long sentences. 
We resampled the generated samples with the model of the best quality. Please see Table 10 in Appendix E.2.\\n\\n3.\\\"MSA and MDA encode the same matching up to a power 2. It\\u2019s unclear why they lead to different empirical results.\\\"\\n\\nWe empirically find that MSA prefers to generate longer sentences but may fall into mode dropping. We guess this is because of the gradient difference between MDA and MSA. For MDA, the gradient of an absolute value is $\\\\pm 1$, which makes it a bit difficult to converge to a local optimum. In contrast, the gradient of MSA has a more flexible range of values: it gets smaller when the predicted loss becomes smaller, pushing it to converge faster to a local minimum. As a result, converging to a local minimum may hurt diversity.\\n\\n4.\\\"Using MMD or Wasserstein distance\\\"\\n\\nGood point! It is a great idea to investigate whether feature matching metrics can be beneficial when substituted for FSA. Our initial idea is to align the lower-order statistics of feature representations. The exploration of unified feature matching methods can be left for future work.\\n\\n5.\\\"It should be clarified earlier in the paper that the used features are extracted from the discrimination network (as the weights between the discriminator and feature extractor are shared)\\\"\\n\\nWe moved this part to Sec. 3.1.\\n\\n6.\\\"Which layer of the discrimination network the features are extracted\\\"\\nAs in Sec. 3.2, we denote H(.) as the non-transformed layer before the non-linearity. Thus the feature extractor H(.) is the last layer before the activation function. We have made it clearer in Sec. 3.2.\\n\\n7.\\\"Question of how reliable is the human evaluation score\\\"\\n\\nHuman evaluation is used to measure the quality (i.e., acceptance, grammaticality) rather than the diversity. We perturbed the sentences and anonymized the model's identity before the evaluation.\"}",
"{\"title\": \"Response to Reviewer3 (Part 2)\", \"comment\": \"8.\\\"MSA and MDA achieve higher scores than real sentences.\\\" \\n\\nThis is because MSA and MDA tend to generate longer grammatically correct sentences, whereas the MSA tends to fall into limited patterns (but still with good quality). The quality of reference captions in the MS COCO Image Caption dataset varies, and some of them are single phrases. It makes sense to get a higher score if the model tends to generate sentences with a longer length and better grammaticality.\\n\\n9.\\\"The term mean is over-used\\\"\\n\\nThank you for pointing out this detail! We rewrote the description before Eq.(2) for clarity.\\n\\n10.\\\"In which sense FSA induces a dynamic regularization?\\\"\\n\\nFrom the initial idea of our paper, the FSA serves to provide fine-grained learning signals during the training process, besides the original GAN's signal. From the perspective of loss terms, the FSA could be regarded as a constraint that dynamically modulates the feature alignments between real and generated feature representations. To avoid misinterpretation, we have removed this claim.\\n\\n11.\\\"In Equation (10) the function \\u201cone_hot\\u201d should be defined. y_i should read the |V|-dimensional vector y\\\"\\n\\nThank you for your careful reading. We have modified this in the revised paper.\\n\\n12.\\\"Equation (11) and Algorithm 1\\\".\\n\\nWe have modified them according to your comments. Thank you very much for the comments!\\n\\nWe appreciate your detailed comments and have revised the paper according to your reviews. **Please feel free to let us know if we could address your concerns**.\"}",
"{\"title\": \"The paper proposes an improvement to sequence generative adversarial networks (GAN) by combining Gumbel-Softmax based GAN with the matching of mean representations of true and generated samples in a latent feature space. Experimental evaluations on synthetic and real datasets show the effectiveness of the method.\", \"review\": \"\", \"summary\": \"The paper proposes an improvement to sequence generative adversarial networks (GAN) to cope with the common training issues of GANs. To this end, the paper combines a Gumbel-Softmax based GAN and a relativistic discrimination function with the matching of mean representations of true and generated samples in a latent feature space. This feature statistics alignment allows information to be leaked from the discriminator to the generator as the used features are extracted from the discriminator network. Experimental evaluations on synthetic and real datasets show the improvement achieved by the proposed method over existing sequence generation networks.\", \"reasons_for_score\": \"The paper straightforwardly combines existing procedures (relativistic discriminator, Gumbel-Softmax approximation for categorical distribution, feature matching) to improve upon vanilla sequence generation networks and somewhat lacks novelty. Although the ablation study is interesting and shows the improvement brought by each module, examples of lengthy generated sequences illustrate that the sentences produced by the GAN are not semantically meaningful.\", \"pros\": [\"Overall, the paper is well written. In particular, the rationale behind the proposed method is justified. Empirical evaluations support these intuitions and show how they contribute to the observed quality of the generated sequences.\", \"The paper aims at addressing a major issue in training GAN for sequence generation: how to strengthen the learning of the generator compared to the discrimination network, which is easier to train? 
The approach promoted in the paper consists in aligning the mean statistics of true and fake sequences. Specifically, the statistics are computed over features extracted from the discrimination network. The objective function of the generator is therefore composed of the usual GAN loss term and the distance between those mean representations. As such, this idea of guiding the generation network with information from the discriminator is interesting and plays a key role in the performance improvement.\", \"In the same vein, the use of Gumbel-Softmax distribution (instead of the discrete distribution) and of the relativistic discrimination function (instead of the classical classification function) helps to learn a better generation model. However, these ideas are not novel and were investigated separately in previous research works.\", \"Experimental evaluations, including both qualitative analysis and quantitative results, are provided in the paper and in the supplementary to show the effectiveness of the proposed framework. The newly proposed GAN achieves superior performance. The comprehensive ablation study is interesting and helps to understand how each module (feature alignment, Gumbel-Softmax, batch size) contributes to the enhanced performance.\"], \"cons\": [\"Although the proposed method, according to the empirical results, shows improved performance, it lacks novelty as features matching, relativistic discrimination or Gumbel-Softmax are not new ideas. The main contribution resides in the better quantitative results compared to existing sequence generation networks. However, when one examines the generated sentences, it appears that they lack semantic meaningfulness especially for long sentences (see for instance Table 8). This shows that the proposed GAN (as well as the competitors) is not effective yet.\", \"Features distribution alignment is an interesting way to measure how close the marginal distributions of the real and fake sequences are. 
The paper considers the Mean Distance Alignment (MDA) and the Mean Square Alignment (MSA), which are respectively the distance and the squared distance between the mean latent representations of the real and generated sequences. Several comments can be made, as follows.\", \"MSA and MDA encode the same matching up to a power 2. It\\u2019s unclear why they lead to different empirical results.\", \"Instead of matching only the mean statistics, the overall distributions of the latent representations can be aligned by considering metrics such as MMD or Wasserstein distance. What would the results look like in that setting?\", \"It should be clarified earlier in the paper that the used features are extracted from the discrimination network (as the weights between the discriminator and feature extractor are shared). Also, the paper should make explicit from which layer of the discrimination network the features are extracted.\", \"The findings of human evaluation (see Table 5) are not unequivocal. MSA and MDA achieve higher scores than the real sentences. The best model, the one with MSA, is not preferred because of a lack of diversity and quality. This raises the question of how reliable the human evaluation score is.\"], \"other_comments\": [\"Page 3, definition of MSA: in the sentence \\u201cmean squared difference between the centroids...\\u201d, the term mean is over-used as Eq. (2) or (3) represents only the squared distance between centroids.\", \"Page 4: in \\u201cThe FSA term on the RHS can also be regarded as a dynamic regularizer for the sequence generator\\u201d the notion of dynamic regularizer is unclear. In which sense FSA induces a dynamic regularization?\", \"In Equation (10) the function \\u201cone_hot\\u201d should be defined. 
Also in (10), I think $y_i$ should read the $|V|$-dimensional vector $y$.\", \"Equation (11) is to be checked carefully as the parameter $\\\\tau$ cancels in the numerator and denominator.\", \"Algorithm 1: the update of the generator $G_\\\\theta$ requires a minibatch from the real dataset in order to minimize $L_{RG}$ + $L_{FSA}$ as $L_{FSA}$ relies on the mean of the real data latent representations.\", \"After rebuttal\", \"I read the response of the authors. The spotted typos are fixed in the revision. Some questions/concerns have been tentatively addressed. However, the novelty of the paper is still not evident, and the use of distances such as MMD or Wasserstein to match the features remains under-explored. Hence I intend to keep my rating.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting paper\", \"review\": \"[Summary]\\nThis paper proposes a new GAN-based text generation method that incorporates feature statistics alignment and gumbel-softmax for reparameterization to deal with mode collapse and unstable training. For feature statistics alignment, the authors design two methods: mean square and mean distance alignments. They evaluate the proposed method on a synthetic dataset, MS COCO caption, and EMNLP2017 WMT news dataset, comparing it with RL-based and non-RL-based models. With extensive experiments including ablation studies, the proposed method shows promising results.\\n\\n[Recommendation]\\nOverall, this paper is clear and well-written. So I lean toward acceptance, but I have some concerns as well.\\n\\n[Strength]\\n- Mode collapse is a challenging issue in GAN training.\\n- Text generation is an important problem.\\n\\n[Weakness]\\n- The authors insist the use of Gumbel-softmax in GAN training is under-explored, but it is not clear. There are more methods using Gumbel-softmax [Gu et al. 2019] and a similar softmax with temperature annealing. The authors do not explicitly distinguish between using Gumbel-softmax and other smoothed softmax methods. \\n- Some related work was missed, such as DialogWAE [Gu et al. 2019] and ARAML [Ke et al. 2019]. In particular, DialogWAE uses GAN and Gumbel-softmax for text generation even though it focuses on dialog generation.\\n- For verifying mode collapse issues, how about using Self-BLEU in addition to BLEU scores as a metric to evaluate the diversity? \\n- Novelty might be incremental. It seems that the novelty is from using feature statistics alignment. To emphasize the contribution of feature statistics, comparing latent feature visualizations with and without FSA might be helpful in addition to the ablation study. \\n\\n[Minor] \\nIn p3, the given real data is --> are\\n\\n\\n[Gu et al. 
2019] DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder. ICLR 2019.\\n[Ke et al. 2019] ARAML: A Stable Adversarial Training Framework for Text Generation. EMNLP 2019.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\n\\nThe paper addresses the task of improving GANs for sequence generation and proposed a method based on the relativistic discriminator. The proposed method employs a Feature Statistics Alignment (FSA) paradigm to reduce the gap between real and generated data distributions. It relies on the relativistic discriminator for \\\"coarse\\\" differences and FSA for \\\"fine-grained\\\" differences between real and generated data distributions. It is evaluated on synthetic and real datasets, and it significantly outperforms the baselines. It also outperforms baselines on human evaluation based on the acceptance, grammaticality, and meaningfulness of the generated sentences.\", \"strengths\": \"The proposed approach is very effective, as demonstrated by significant performance improvements in the experiments across synthetic and real datasets. Also, it can generate better sentences compared to the baselines, as shown by human evaluation.\", \"weakness\": \"Although the proposed model is thoroughly evaluated and empirically effective, it is not very different from existing methods, except for FSA. The application of FSA in this context might be novel; however, the proposed approach seems to be a simple combination of two existing approaches. Therefore, the novelty of the model is limited.\", \"minor_comments\": \"1. Correction: Did you mean\\n\\nIn contrast, the lower \\u03c4 could discourage exploration and tend to explore during training. -> In contrast, the lower \\u03c4 could discourage exploration and tend to exploit during training. ?\\n\\n2. It would have been good to see a head-on comparison of the generated samples (baseline vs. proposed approach) in the paper's main text.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea but flawed experiments\", \"review\": \"**Main Claim:**\\n\\nIn this work, the authors propose to use the Feature Statistics Alignment paradigm to enrich the learning signal from the discriminator in a sentence generation GAN. The proposed model can generate sentences with better likelihood and BLEU on one synthetic and two real datasets.\\n\\n**Contributions:**\\n\\nThis work introduces a novel and interesting idea of Feature Statistics Alignment in training GANs. \\n\\nThe authors follow the convention in this domain, and evaluate the model on three datasets.\\n\\nThe experiment results show that the proposed model outperforms existing models. However, the authors need to clarify some details to make the results trustworthy (see weakness). \\n\\n\\n**Strong points:**\\n\\nThe idea is novel and interesting. \\n\\nThe model and training procedure are clearly explained. Related works are cited well. \\n\\n\\n**Weak points:**\", \"in_table_2\": [\"The LSTM model gets NLL lower than the real data. This is a clear evidence of overfitting.\", \"In SAL (Zhou et. al, 2020), NLL_{gen} is used to evaluate the diversity of the generator. But this metric is missing here without explanation.\"], \"in_table_3\": \"- The BLEU metric in this paper is the BLEU(F) metric in SAL (Zhou et. al, 2020). This metric evaluates the generated sentences using the test set as a reference. Thus the BLEU(F) metric cannot show the diversity of examples. \\n- The BLEU(B) (Zhou et. al, 2020) metric is missing. The BLEU(B) metric evaluates the test set using the generated sentences as a reference, so it can detect mode collapse of a generative model. \\n\\nIn section 4, although the authors clearly cite previous works for experiment settings, I think it\\u2019s worthwhile to repeat the definition of each metric, and some other key points in the paper, so that readers can easily understand the notations and jargon in this section. \\n\\n**Recommendation:**\\nReject. 
\\n\\nThere\\u2019s a major flaw in the evaluation metrics. On both synthetic and real datasets, the evaluation metrics prefer overfitted models, i.e., if the model can remember one example from the training set and repeat that sentence, it can get a very high score. \\n\\nI will reconsider my recommendation if (1) I misinterpret the metrics or (2) the authors provide more evidence on the diversity of the generated sentences, for example showing the NLL_{gen} metric on the synthetic dataset, and the BLEU(B) metric on real datasets. \\n\\n**Questions:**\\n\\nHow is NLL_gen computed?\\n\\n**After Rebuttal**\\nThe author's reply partially resolved my concerns, although the diversity of models has not improved, nor has it significantly decreased. Thus I have increased my score from 3 to 4.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
eIPsmKwTrIe | Using Deep Reinforcement Learning to Train and Evaluate Instructional Sequencing Policies for an Intelligent Tutoring System | [
"Jithendaraa Subramanian",
"David Mostow"
] | We present STEP, a novel Deep Reinforcement Learning solution to the problem of learning instructional sequencing. STEP has three components: 1. Simulate the student by fitting a knowledge tracing model to data logged by an intelligent tutoring system. 2. Train instructional sequencing policies by using Proximal Policy Optimization. 3. Evaluate the learned instructional policies by estimating their local and global impact on learning gains. STEP leverages the student model by representing the student’s knowledge state as a probability vector of knowing each skill and using the student’s estimated learning gains as its reward function to evaluate candidate policies. A learned policy represents a mapping from each state to an action that maximizes the reward, i.e. the upward distance to the next state in the multi-dimensional space. We use STEP to discover and evaluate potential improvements to a literacy and numeracy tutor used by hundreds of children in Tanzania. | [
"Deep Reinforcement Learning",
"Intelligent Tutoring Systems",
"Adaptive policy",
"Instructional Sequencing"
] | Reject | https://openreview.net/pdf?id=eIPsmKwTrIe | https://openreview.net/forum?id=eIPsmKwTrIe | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"cOaZWbvN2Hw",
"Z3zduk8Y3H",
"hY1mH9b_FTu",
"HgigmCH17ll",
"v0KWedwiPC6",
"UmlMS4uXOx_",
"5YkkST5sn0"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513682,
1605242227482,
1605242185907,
1605242120256,
1603908561900,
1603762354556,
1603663091387
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3451/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3451/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3451/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3451/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3451/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3451/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper introduces an important and interesting problem and a potentially interesting approach. Unfortunately, the reviewers agree that the paper isn't appropriate for ICLR in its current form. However, hopefully the feedback will be useful for the authors in revising and resubmitting this paper to another venue.\"}",
"{\"title\": \"Thanks for helpful reviews\", \"comment\": \"AnonReviewer3 - thanks a lot for your valuable reviews/feedback. We shall keep these comments in mind as we try to improve our work in the future.\"}",
"{\"title\": \"Thanks for helpful reviews\", \"comment\": \"AnonReviewer1 - thanks a lot for your valuable reviews/feedback. We shall keep these comments in mind as we try to improve our work in the future.\"}",
"{\"title\": \"Thanks for helpful reviews\", \"comment\": \"AnonReviewer2 - thanks a lot for your valuable reviews/feedback. We shall keep these comments in mind as we try to improve our work in the future.\"}",
"{\"title\": \"Paper is not about representation learning and does not motivate differences from related work\", \"review\": \"The paper intends to contribute \\u201ca novel framework for optimizing instructional sequencing in an intelligent tutoring system\\u201d. More specifically, this framework uses deep reinforcement learning and evaluates learned policies on historical data.\\n\\nA major strength of this paper is working with real human data from an application with obvious positive human impact. That working with this rich data comes necessarily with only working with a small amount of data is understandable, and it is not a weakness of the paper.\\n\\nThe most significant weakness of the paper is that it does not articulate a contribution that centers on representation learning -- the focus of this conference. A representation of student knowledge over time is learned (the 118-parameter HOT-DINA model), but this representation was already contributed by past researchers and does not seem to match the intended contribution of the paper. An action policy for controlling tutor behavior as a function of student knowledge state is also learned, but the policy (or its internal details) is not examined from a representation learning standpoint. (For example, does some aspect of the different learned student-specific policies appear to recover/mimic another known aspect of those student identities? If so, would that be a good or bad result?)\\n\\nThe next most significant weakness is that when the paper makes novel choices, those choices are neither evaluated nor strongly motivated based on the past literature. Why PPO over DQN? (PPO can work in certain settings that DQN cannot, but past work already demonstrated DQN for a very similar setting.) Why HOT-DINA over BKT? 
(HOT-DINA is a much more expressive model, but the small amount of valuable historical data for this setting may limit the effectiveness of expressive models due to overfitting.)\\n\\nAdditional weaknesses are noted in the additional section-by-section feedback below.\\n\\nThis reviewer recommends (2) strong rejection. This is not inherently a paper about representation learning, and, even as a generic applied machine learning paper, it does not sufficiently motivate or evaluate the intended contribution of \\u201ca novel framework\\u201d for using machine learning in a specific application.\", \"questions_for_the_authors\": [\"Are there structural reasons why Q learning (e.g. DQN) cannot be applied in this setting?\", \"Is there a way to verify that HOT-DINA is not overfitting in a way that makes evaluating the system on the same historical information used to train it unsound?\"], \"section_by_section_notes\": [\"Title\", \"Focus on \\u201cinstructional sequence policies\\u201d (for ITS, using RL), was hoping for something that sounded immediately relevant to representation learning.\", \"Abstract\", \"It sounds like the method might not be as important as the application.\", \"Hopefully we\\u2019ll hear more about the representation learned because this is an ICLR submission.\", \"Introduction\", \"The introduction doesn\\u2019t state the intended conclusion or motivate the novel parts of the work for the reader.\", \"There\\u2019s a learned policy in here, but what is the learned representation you want to highlight for this specific venue on representation learning (ICLR)?\", \"Simulating\", \"This is how you fit a pre-existing representation to new data, it doesn\\u2019t seem like this is the contribution of the paper.\", \"Training\", \"Missing use of domain knowledge:\", \"Are the 100 timesteps considered in PPO training based on 100 being a typical number of interaction steps with the ITS among the Tanzanian students?\", \"Even with a finite horizon, 
the rate at which students decide to exit the activity could be used to motivate a discount factor < 1. Presumably you have this information as well.\", \"Notes like \\u201cUseful to mention this?\\u201d suggest the paper was submitted in an incomplete state.\", \"It\\u2019s interesting to list the design alternatives for representing actions, but each should be contextualized with references to past research that used something similar.\", \"Evaluating\", \"The evaluation in terms of local and global impact is unfamiliar to this reviewer (who knows other RL+ITS work). Not enough information is given to pin down exactly what local impact measures.\", \"What is the source and meaning of the historical baseline number?\", \"Figure 3 is too busy to interpret. Consider presenting it as a chart that aggregates across students using a single line (plus error bars) to represent the mean (plus stddev) state over time for the two policies.\", \"The way student-specific models are \\u201cevaluated against historical data\\u201d on those specific students suggests we are just testing on the training data. Why is this a valid methodology for this application? Cite past work on evaluating RL-based ITS systems to motivate your methods.\", \"Relationship\", \"Consider covering work by others much much earlier in the paper so that the reader can understand why you made the choice you did (and that you can convince them that you know RL has been applied to ITS across many decades previously).\", \"Near \\u201cSTEP uses a more powerful deep RL method\\u201d -- it is true that policy gradient can be applied in certain applications where Q-learning cannot, but we aren\\u2019t given a note as to whether this is the case in the current application. Based on previous work, it seems like Q learning was applicable. 
Thus, using policy gradient methods (including PPO) would seem to add needless complication.\", \"This section indeed states how the current work is different from past work, but it does not motivate the differences. If others were successful with different methods, why change them for this paper?\", \"Conclusion\", \"\\u201cThis paper contributes a novel framework\\u201d -- what is novel is the combination of the HOT-DINA student knowledge model with the PPO reinforcement learning approach (within a larger framework shared by many other papers).\"], \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good idea and the results are showing promise, but the paper is not ready for publication in its current form.\", \"review\": \"I really like the direction this paper is going in but the results as currently reported seem rushed and under-analyzed. The paper is shorter than the max length of ICLR papers yet omits crucial details about the action selection strategies and presents the empirical gains with charts that lack labels and no textual analysis (just graphs without context) of the individual student trajectories. There is also no comparison to the existing state-of-the-art from Shen et al. (cited). Also, the empirical study seems to have been done on the same 8 students on which the model was calibrated, which likely caused significant overfitting and puts the generality of the empirical results in doubt. Finally, much of the IRT terminology is not defined until later in the paper and the related work on RL for intelligent tutoring systems could use some additions (mentioned below). In summary, this is a good idea and the results are showing promise, but the paper is not ready for publication in its current form.\", \"details\": \"My biggest concern with the paper is the lack of rigor in the empirical analysis. First and foremost, it appears the data from the same 8 students were used for calibrating the model (both the knowledge tracing and the PPO decision-making training) and then those same students were considered in the simulation (testing) of the models. It seems likely the models were overfit to the data from these students. A proper empirical study in this kind of educational setting needs to train the models on data from one set of students and then use a holdout set to test it.\\n\\nNext, the metrics and charts reported in the paper do not make the improvement clear. 
The \\u201clocal impact\\u201d metric reported in the paper does not seem like a good way of assessing improvement since (as the authors admit) it multi-counts improvements from one step in subsequent steps. I suggest removing that metric in favor of the more grounded global metric. The charts reported in Figure 3 are not understandable or analyzed in the text. Why are some of the sub-plots blank? And where are the labels saying which chart is associated with each of the 4 variants? We can\\u2019t tell which problem setting matches each chart. Finally, while the green line (new method) outperforms the baseline in the majority of charts, there are some where it does not. But no analysis or explanations are provided despite there being plenty of room in the paper.\\n\\nFinally on the empirical side, the authors reference the work of Shen et al. who applied DQN to this ITS problem, but the authors do not provide an empirical comparison to this DQN based approach. Since that is the state of the art, it seems necessary to apply that algorithm here and see if the gains over the baseline are comparable to the new algorithm.\\n\\nOn the algorithm side, while most of the approach is fairly clear, the description of the \\u201cType 1 agent\\u201d omits crucial details about how items or subject areas are actually selected. Unlike the other 3 cases, where actions are clearly related to items, Type 1 has an action that moves a threshold. But how is that helpful? Can\\u2019t an agent just move the threshold very low so it thinks all students have mastered all skills? And how does moving a threshold determine an item to be given to a student? More detail is needed to understand this case, which seems much different from the other 3.\\n\\nOn terminology, the paper often uses terms (such as Guess, Slip, Learn\\u2026 on page 3, or \\u201cb\\u201d in the last paragraph of page 3) or presents results (for instance the thetas in table 1) before the definitions of these terms. 
Since readers at ICLR are unlikely to have an IRT background, the definitions on page 4 need to be moved up to a terminology section towards the beginning.\\nOn related work, the paper did a good job referencing several very recent papers but failed to reference some of the papers related to core concepts and also lacks references to other slightly older works that used the same mix of student knowledge tracing with offline RL.\", \"examples\": \"HOT-DINA first appears on page 3 but no citation is given\\nItem response theory is mentioned on page 3 with no citation\\n\\nOther RL for tutoring systems work:\\n\\u201cCognitive modeling to represent growth (learning) using Markov decision processes\\u201d \\u2013 builds a Bayesian representation of student skills and then uses a POMDP to plan in the belief space over skills (similar to the current work\\u2019s representation)\\n\\n\\u201cLearning a Skill-Teaching Curriculum with Dynamic Bayes Nets\\u201d \\u2013 similar to current work, calibrates a Bayes Net based on student data and uses RL to create new policies.\\n\\nThe last sentence of the first paragraph of section 3.1 seems to be an author\\u2019s note or question to co-authors.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Needs improvement in empirical rigour, novelty, clarity, and reproducibility. Recommend rejection.\", \"review\": \"1. Summary of paper:\\n\\t\\ta. The paper contributes a deep RL approach to learning instructional sequencing. The approach called STEP starts by simulating tutor and student models. The tutor simulation is based on the RoboTutor ITS and the student simulations are fit to historical data of children interacting with RoboTutor using the HOT-DINA approach. The instruction sequencing model is then trained using PPO and evaluated using novel measures of sequencing decision impact on local and global learning gains (estimated from running student simulation). The paper contributes one set of experimental comparisons between four variations of the sequencing agent and the RoboTutor baseline in 8 runs of the two systems.\\n\\t2. Strengths\\n\\t\\ta. At a high level, the proposed approach is well motivated, using RL to optimise parameters in the existing tutoring system or more granularly make sequential decisions about what activity to provide to the student.\\n\\t\\tb. The local and global reward design is a strong contribution that uses historical data and the student simulation (knowledge tracing) to estimate credit for policy decisions in a counterfactual manner.\\n\\t\\tc. The future work discusses the limitation of not using actual children's scores to evaluate this learning model.\\n\\t3. Weaknesses\\n\\t\\ta. Novelty/impact and context within related literature: \\n\\t\\t\\ti. It is difficult to judge the novelty of this work since the related work section is too brief and does not actually describe several works compared against. Additionally, table 4 is not descriptive enough, has potentially relevant work undescribed (Whitehall & Movellan's 2017 POMDP/policy gradient approach), and has work referenced previously missing from it (Yudelson et al. 2013, Pardos & Heffernan 2011).\\n\\t\\tb. Experimental rigour:\\n\\t\\t\\ti. 
Two strong claims are made in the related work section that do not have sufficient experimental evidence. 1) A direct comparison of BKT to HOT-DINA for modelling student knowledge gains is required to show the impact of this central claimed contribution. 2) A direct comparison against Shen et al. 2018 is required to show that PPO is indeed more effective in this domain. This is also important since it is a central claimed contribution.\\n\\t\\tc. Clarity: The clarity of the paper needs significant effort to improve. Instances below.\\n\\t\\t\\ti. The structure of the paper reads like a technical report rather than an empirical investigation. The contents would be far easier to understand with a different structure emphasising a research question, background to understand/motivate it, methodology to answer it, results, and discussion.\\n\\t\\t\\tii. Several sections are far clearer in the supplementary data document. The reproducibility of the paper is boosted by this. Given the extra space available, several sections could stand to be transferred to the main paper. My original review did not include supplementary data and points around describing data and examples are boosted by adding them to the main paper.\\n\\t\\t\\tiii. Until significant rereads, it isn't clear what the relationship is between RoboTutor and STEP/this work. With my current understanding, RoboTutor is an external system that has been used to collect data on children learning various skills using it. This data was then used to evaluate STEP in terms of estimated learning gains.\\n\\t\\t\\tiv. The tutor simulator section describes how RoboTutor functions. The student simulator describes how the knowledge tracing model works. Example differences between activities, skills, steps, etc. would make the content clearer. \\n\\t\\t\\tv. In the tutor simulation section it states that the child can select activities, so this part is replaced by RL decision-making in agent type 3 and 4, right? 
This relates to my question about RoboTutor and this work. I am understanding that the current work simulates the exact decision-making process of RoboTutor but can vary/change that process according to the different agent types. Is that correct?\\t\\t\\tvi. Bayesian Knowledge Tracing needs a citation and a brief explanation in a background section.\\t\\t\\tvii. HOT-DINA needs a citation and a brief explanation in a background section.\\t\\t\\tviii. Item Response Theory needs a citation and a brief explanation in a background section.\\t\\t\\tix. What is the difference in computational cost between a BKT approach and HOT-DINA?\\t\\t\\tx. A clearer highlighting of what data was used would make it easier to read the article. What were the contents of the data collected to enable knowledge tracing in the student simulator? This also ties in to the comment about examples. Adding a running example of an activity, skill, step for the tutoring task would enable a description of what data is collected to measure student knowledge in the data set.\\t\\t\\txi. What do the guess, slip, learn, etc. parameters measure?\\t\\t\\txii. \\\"We use MCMC sampling for Bayesian inference with PyStan rather than the OpenBUGS Gibbs sampling package used in the original HOT-DINA work because PyStan is faster and handles larger datasets.\\\" This line needs references for all software used, but most importantly, a reference to the original HOT-DINA work is necessary.\\t\\t\\txiii. The sole paragraph on page 3 is difficult to parse and seems important to understand how the student simulator works. The paragraph is dense and conversational in style referencing a sequence of steps without making them clearer and referencing equations that are on the next page. It would be much clearer to have the exposition and equations interspersed and use pseudo-code or a flowchart to explain the steps performed to simulate the student (at least a list).\\t\\t\\txiv. 
\\\"We can train different types of RL agents depending on their state space and range of actions, which depend on how far they depart from RoboTutor\\u2019s current decision policy.\\\" This would make sense as the start of a new section on the experimentation.\\n\\t\\t\\txv. A stronger partitioning of content would also be achieved by calling a potential new section at this point methodology, experiments, or evaluation. This would also help organise the next section of state, action, and agent types into a concrete experiment for which the paper is describing the state and action spaces.\\n\\t\\t\\txvi. In section 3.2, it is confusing to have states, then actions, then agent types described. It seems like one experiment with 4 experimental variants (agent types) and a baseline (RoboTutor).\\n\\t\\t\\txvii. Table 2 does not convey much more information than the explanation before it.\\n\\t\\t\\txviii. Figure 4 is very difficult to parse. Instead of showing 36 subplots (with 4 empty subplots) for 8 students and 4 agents comparing against the RoboTutor baseline, it would be far easier to compare performance against students or against agent types, by combining all 8 student type runs for each variant into a single variance-shaded run in a single graph (e.g. using https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.fill_between.html). That would allow for much higher information density and an at a glance comparison between the 5 runs being compared combined across all 8 students. At the very least this should be done by combining all 5 runs into 8 separate graphs, though the previous approach is preferable.\\n\\t\\t\\txix. Related work is far too brief and does not make clear what is being compared for many cited works. E.g. It isn't clear why the works that specify reward a certain way in Doroudi et al. 
(2019) show a disadvantage to the current approach; the works in table 4 don't specify why the current approach is an advance over their contributions.\", \"d. Reproducibility: \", \"i. Section 3 (and the paper in general) contains far too little information about the policy representation, learning hyperparameters, network architecture, etc. to understand the contribution.\", \"4. Recommendation: \", \"a. Per the weaknesses in the review above, I recommend the paper for rejection. I don't think the weaknesses in experimental rigour can be fixed in time, content space, or degree to support acceptance. The significant editing required to fix the other issues also seems unrealistic in time and space.\", \"5. Minor Issues\", \"a. \\\"(Previous methods used reward=0 or 1 based on correct attempt or something else. Useful to mention this?)\\\" There is a comment remaining in the paper that should have been removed.\"], \"rating\": \"2: Strong rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
sgnp-qFYtN | Sparsifying Networks via Subdifferential Inclusion | [
"Sagar Verma",
"Jean-Christophe Pesquet"
] | Sparsifying deep neural networks is of paramount interest in many areas, especially when those networks have to be implemented on low-memory devices. In this article, we propose a new formulation of the problem of generating sparse weights for a neural network. By leveraging the properties of standard nonlinear activation functions, we show that the problem is equivalent to an approximate subdifferential inclusion problem. The accuracy of the approximation controls the sparsity. We show that the proposed approach is valid for a broad class of activation functions (ReLU, sigmoid, softmax). We propose an iterative optimization algorithm to induce sparsity whose convergence is guaranteed. Because of the algorithm flexibility, the sparsity can be ensured from partial training data in a minibatch manner. To demonstrate the effectiveness of our method, we perform experiments on various networks in different applicative contexts: image classification, speech recognition, natural language processing, and time-series forecasting. | [
"neural networks",
"pruning after training",
"weight pruning",
"proximal operator",
"fixed point iteration"
] | Reject | https://openreview.net/pdf?id=sgnp-qFYtN | https://openreview.net/forum?id=sgnp-qFYtN | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"z50w1pibRaz",
"x2-eo0T2S2v",
"BXlOub-nd0F",
"yE68Pl20qX",
"Uogc9ddyUJ",
"P7CJus0HcCs",
"aXt4gvA3puI",
"c5orApcaLtc",
"rBN5QqNMOS",
"VxTDcltfegt",
"f1TE2IKKj3j"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040356968,
1606303043589,
1606233196260,
1605904912733,
1605904708559,
1605904655730,
1605904407909,
1603987847657,
1603896428398,
1603866094841,
1603726168261
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3450/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3450/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3450/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3450/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3450/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3450/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3450/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3450/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3450/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3450/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"I have serious concerns about how experiments are reported in this paper. Most methods tried to compare at an iteration complexity of roughly 100 epochs because it is known that more computation improves performance very significantly but the computational resources are limited for many researchers, especially in academia. While this convention may not be the ideal way to compare different methods, for fairness, this practice has been followed in most previous papers.\\n\\nUnfortunately this paper disregarded this practice, and on ImageNet the reported results from previous works were mixed at 100 epochs (e.g. STR) and at 500 epochs (RigL \\u2014 which was explicitly marked to be 5x in the original paper) without any clarification, and the only other method in the table showing comparable performance to the proposed method, LRR, also requires many more than 100 epochs. Moreover, the authors did not explicitly disclose the equivalent epochs of their algorithms in the ImageNet experiments, and this is not acceptable. Based on the information inferred from the current writing, it is extremely likely that significant unfair advantages were given to the proposed algorithms. \\n\\nSince the authors did not report experiments appropriately, this paper cannot be accepted in its current form regardless of other potential merits of the proposed methods. I hope the authors view this outcome positively, and proactively fix the problem. If, in revised versions, the experiments are reported according to the common practice, I am sure the work would become publishable.\"}",
"{\"title\": \"Appreciating the quick response and further feedback\", \"comment\": \"A warm thanks to the reviewer for giving time to the revised submission, increasing their score, and providing valuable feedback.\\n\\n*The RigL results that you...*\\n\\nTable 3 now reflects the RigL results for 1x training time. We have also mentioned this in the text.\\n\\n*You should be clear...*\\n\\nWe have changed all \\u201cFLOPs\\u201d to \\u201cinference FLOPs\\u201d in the text and table captions.\\n\\n*You should report how you...*\\n\\nModern CPUs and GPUs support fused multiply-add (FMA), making one multiplication and one addition a single floating point operation (https://en.wikipedia.org/wiki/FMA_instruction_set). In our case 1 FLOP = 1 MAC; we have reported this in Appendix D.\\n\\nThanks for providing the working git link for the Zhu & Gupta implementation.\"}",
"{\"title\": \"Response 1\", \"comment\": \"Thank you to the authors for the updates. I think the addition of FLOP calculations & LRR as an additional baseline greatly improved the experimental results. I have a couple remaining concerns\\n\\n1. The RigL results that you did not run (80%/90% sparsity in table 3) were run with 5x the number of training steps, but it looks like the experiments that you ran for 60% sparsity and 96.5% sparsity were run with less (I assume 1x). I think the best way to fix this oddity is for you to report their 1x training time results for 80%/90% sparsity in your table and then acknowledge in the text that RigL is a technique designed for sparse-to-sparse training, which could enable additional training time thanks to the training time FLOP savings.\\n2. You should be clear that when you say \\\"FLOPs\\\" you mean FLOPs at inference time. This is clear in the introduction but some of the captions and text in the experiments section does not note this.\\n3. You should report how you calculated FLOPs. They are often counted differently (e.g., with a multiply-add counted as 1) and it's easy to make mistakes in these calculations.\\n\\nThe technique of Zhu & Gupta is implemented in the TensorFlow model pruning library (https://github.com/google-research/google-research/tree/master/model_pruning). I've increased my score based on the authors updates. I'm open to increasing it further if the authors address my above concerns.\"}",
"{\"title\": \"Response to Reviewer's comments\", \"comment\": \"Thank you very much for taking time to review our paper. We greatly appreciate your criticisms and suggestions! Below, we address all concerns raised.\\n\\n*Some results presented for existing...*\\n\\nWe ran experiments for which we did not find results in the literature. In the case of RigL for 60\\\\% and 96.5\\\\% sparsity, we used the code provided by the authors as it is and specified the desired sparsity in the training script by setting the other hyperparameters as in the paper. We did the same for all other benchmarks. We believe that we could strengthen our experimental analysis by doing 3 runs with different seeds for all the methods, datasets and networks in order to draw more precise conclusions. We will report this in the final version of the paper since it requires computation time.\\n\\n*Using RigL as the \\\"state-of-the-art\\\"...*\\n\\nWe completely agree with the reviewer's point that RigL might not be fairly compared with SIS. Following the reviewer's comments, we have performed experiments with the Learning Rate Rewinding method proposed by Renda et al., 2020. This approach has been recognized to work best among existing methods that apply magnitude-based pruning on pretrained networks. 
Please check the revised submission.\\n\\n*All results should be reported...*\\n\\nWe have included FLOPs in Tables 2, 3, and 4.\\n\\n*Make it clear that some algorithms...*\\n\\nWe have specified this in the first paragraph of Section 4 and also in Section 2.\\n\\n*Add comparisons with other other algorithms...*\\n\\nWe have included the state-of-the-art magnitude based pruning method that works on pretrained networks, Learning Rate Rewinding (LRR) proposed by Renda et al., 2020.\\nWe would be happy to compare SIS with the interesting approach by Zhu and Gupta, but we were not able to find publicly available codes.\\n\\n\\n*The brackets are backwards...*\", \"we_use_the_european_mathematical_notation\": \"$]a,b[$ is the open interval with lower bound $a$ and upper bound $b$\\n\\n*I would encourage the authors to explain...*\\n\\nThe convex analysis tools are now introduced in a more comprehensive way in Section 3.1.\\nDue to the lack of space, it is difficult to give more introductory materials. There however exist a number of nice tutorial papers about proximal methods (some of which we cite) and also the website http://proximity-operator.net gathers a lot of useful information.\"}",
"{\"title\": \"Response to Reviewer's comments\", \"comment\": \"Thank you very much for your positive review. We greatly appreciate your comments and suggestions! Below, we address all concerns raised.\\n\\n*The computational characteristics...*\\n\\nIn the case of SIS and LRR, we use pretrained networks for compression. All other methods train a sparse network from scratch and take more epochs than training their dense counterparts. LRR, on the other hand, goes through multiple rounds of pruning and fine-tuning to achieve the desired sparsity. In the case of SIS, some time in the form of Algorithm 1 and 2 iterations is required to identify a sparse network, followed by a few epochs of fine-tuning of the sparse network. Since SIS is applied in parallel on all layers of a network, it is hard to compare the different methods. We have however provided pruning and retraining details of SIS in Appendix D: Experimental Setup.\"}",
"{\"title\": \"Response to Reviewer's comments\", \"comment\": \"Thank you very much for taking time to review our paper. We greatly appreciate your comments and suggestions! Below, we address all concerns raised.\\n\\n*The proposed algorithm does not significantly improve...*\\n\\nOur method is based on principles different from existing methods, but it achieves comparable or better accuracies across various networks and datasets. In addition, in all experiments, we observe that sparse networks generated by our method have the best run-time efficiency (fewest FLOPs). We think that this is evidence that our method better accounts for the mathematical properties of the activation function in the sparsification process.\\n\\n*The claim in the conclusion...*\\n\\nWe provide an empirical convergence analysis in Appendix E. Theoretical convergence proofs for this type of optimization problem have been established in the cited papers. \\n\\n*The core optimization problem...*\\n\\nIndeed, the inner loop corresponding to the subgradient projection algorithm may introduce numerical errors because of the limited number of subiterations. However, first of all, the convergence of the Douglas-Rachford algorithm remains theoretically guaranteed if those errors are summable (Combettes & Pesquet, 2007).\\nSecondly, we never observed any misbehavior in our practical experiments as we set the number of subiterations large enough. The choice of $\\\\eta$ (one single parameter for each problem) was made empirically so as to reach the desired compression performance. We search for the best value of $\\\\eta$ by doing multiple experiments with the layer that has the highest number of parameters.\\n\\n*Should be nice if authors can provide...*\\n\\nWe have provided the requested pseudo-code in Section 3.6 of the revised submission. We will release code, weights, and all experiment logs via our Polyaxon interface.\"}",
"{\"title\": \"Response to Reviewer's comments\", \"comment\": \"Thank you very much for taking time to review our paper. We greatly appreciate your comments and suggestions! Below, we address all concerns raised.\\n\\n*In the proposed model, we need to choose a proper value of $\\\\eta$ for each layer...*\\n\\nWe agree with the reviewer that different layers may have different optimal values of $\\\\eta$ for a target sparsity level. We first experimented with LeNet-FCN on MNIST to find how sensitive the choice of $\\\\eta$ is at different layers to achieve a preset sparsity. What we found is that there is not much difference between the $\\\\eta$ values of any two different layers of a given network. In order to keep parameter tuning practical and to meet our compute budget, we did not search for optimal $\\\\eta$ values for all the layers of each network in our experiments. For a given network, we thus choose the $\\\\eta$ value that we found best by experimenting with the layer that has the highest number of parameters and then use this $\\\\eta$ to compress the whole network.\\n\\n*Because of my above concern, I recommend...*\\n\\nWe have now included benchmarks and results for all methods applied on ResNet50 on ImageNet at 96.5\\\\% sparsity. Please see Table 3 in the revised submission.\"}",
"{\"title\": \"In this paper, the authors propose a new model compression method based on subdifferential inclusion.\", \"review\": \"In this paper the authors propose a new model compression method based on subdifferential inclusion. The key idea is to make the outputs of the neurons in the sparse and dense networks at the same input close enough. They rewrite the activation function as the proximity operator of a proper convex function and finally formulate the compression problem as a constrained minimization problem using the technique of subdifferential inclusion. They conduct a series of experiments to evaluate the performance of their proposed methods.\", \"positive_aspects\": \"1.\\tThe idea of this paper makes sense. \\n2.\\tThe experiment results show that the proposed method can achieve better performance than the baselines under this paper\\u2019s experimental setting. \\n3.\\tThis paper is well written and easy to follow.\", \"my_concerns_are\": \"1. In the proposed model, we need to choose a proper value of $\\\\eta$ for each layer, which is the required accuracy of the neuron\\u2019s output after compression. I understand that as reported by the authors in this paper, only in a few experiments, they need to search for a good $\\\\eta$. However, I think it is non-trivial to find a proper value for $\\\\eta$. I mean that different layers could have different tolerances on accuracy. Since our goal is to achieve high test accuracy in the compressed network instead of the accuracy of the neuron\\u2019s output after compression, if we can find better values for $\\\\eta$ we could achieve higher test accuracy. In other words, as it is challenging to find a near-optimal $\\\\eta$ for each layer, we could not reduce the network to a very small size. \\n\\n2. Because of my above concern, I recommend the authors to give more results of compressing networks into much smaller sizes. 
For example, in RigL, the size of ResNet50 on ImageNet is compressed by more than 97%.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper proposes a compression scheme for NNs that does not achieve SOTA but is an appealing competitor according to the presented numerical experiments.\", \"review\": [\"The paper proposes a network compression algorithm by exploiting a reformulation of the activation function as a proximity operator. The latter is an optimization problem whose optimality condition reveals constraints on the weight matrix W of the neural net. The main idea is then to \\\"biasedly\\\" select W as a minimizer of a sparsity-inducing penalty under a relaxation of the previous optimality conditions. The authors provide details on solving such a problem as well as numerical experiments that lead to results similar to competitors.\", \"The proposed algorithm does not significantly improve the accuracy of estimators (eg convNet in Cifar) when compared with actual methods.\", \"The claim in the conclusion \\\"SIS is reliable in term of convergence guarantee\\\" is not supported by clear evidence. I did not find any such convergence proof in the paper. Once the SIS compression is used, it is unclear whether the same accuracy as the non-compressed NN is preserved.\", \"The core optimization problem (7) is solved approximately with Douglas-Rachford iterations. Neither the effect of the optimization error nor the selection of eta is clearly discussed.\", \"It would be nice if the authors could provide a pseudo-code of the overall SIS strategy in a practical deep neural net (not only one layer). As stated, the main idea is lost in the technical details for solving (7) (where there is little or no new contribution).\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Highly impactful contribution with end-to-end results and analysis\", \"review\": \"## Summary\\nThe authors pose sparsification as a subdifferential inclusion problem, a novel formulation that yields quite meaningful results on established benchmarks/tasks. The paper overall is very well-written with a detailed overview of current sparsification techniques and how the proposed method differs.\\n\\n## Pros\\n* Very comprehensive analysis and proofs (which seem correct, although not thoroughly verified)\\n* Empirical results justify this novel approach across the board\\n\\n## Suggestions\\nThe computational characteristics of using SIS have not been characterized in the manuscript; it is not very clear what the complexity of training a large model with the proposed approach is. The authors suggest their training approach is efficient, but do not provide any empirical results or further justification. For example, all of the results in Table 3 and Table 4 could have an additional column that characterizes the time to train.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting technique, but experiments need significant improvement\", \"review\": \"Summary:\\n\\nThe authors propose a new algorithm for inducing sparsity in the weights of neural networks after training. The proposed algorithm exploits the properties of commonly used activation functions to cast the sparsification problem as the minimization of a sparsity measure subject to approximation accuracy constraints. The proposed problem can be solved using convex optimization.\", \"pros\": \"The authors\\u2019 insights about popular activation functions and the approach used to cast the sparsification problem as a convex optimization problem are clever and interesting. The authors presented experiments on a wide range of deep learning models and tasks.\", \"cons\": \"\", \"some_of_the_details_of_the_authors_experiments_are_not_clear_or_potentially_misleading\": \"1. Some results presented for existing techniques are from those techniques\\u2019 original papers, but some results were re-run by the authors. For example, consider the ResNet-50 results on ImageNet (table 3, left). The RigL authors did not present results at 60% sparsity, and Appendix D does not include details on how this number was generated beyond the authors using the released code with RigL. The numbers at 80% and 90% sparsity are taken from the RigL paper. However, these numbers were achieved with 5x the number of training steps, which was enabled by the reduced number of FLOPs used by RigL during training because it maintains a constant level of sparsity throughout the training process. They also use non-uniform distributions of sparsity across the layers of the network, which affects the number of FLOPs in the resultant network. The authors of this paper report a lower top-1 accuracy at 60% sparsity than the RigL paper reports at 80% sparsity, which leads me to believe that the training conditions (time, sparsity distribution) are not the same.
Similarly, all of the RigL results for MobileNet family models (table 3, right) appear to have been run by the authors, and the training setup details are not clear. For these results generated by the authors of this paper, they should also detail the amount of hyperparameter tuning performed for these baselines, as this can make a large difference in accuracy. I focused here on RigL because it appears to be the most commonly used baseline by the authors of this paper, but it seems likely that these observations apply to other techniques as well.\\n2. Using RigL as the \\u201cstate-of-the-art\\u201d baseline for most comparisons is not entirely fair given it has additional capabilities (i.e., the ability to enable sparse training by maintaining a constant number of parameters across training) compared to the authors\\u2019 proposed post-training sparsification algorithm. Sparse training (i.e., sparse-to-sparse training) is known to be a more difficult problem than dense-to-sparse training [1] or post-training sparsification. It is good to include RigL for comparison, but this distinction should be made clear and other techniques that have comparable ability to the proposed technique should be included as well.\", \"my_suggestion_to_the_authors_are_the_following\": \"1. All results should be reported as accuracy with a given parameter count and accuracy with a given FLOP count. Ideally, these tradeoff curves should be plotted across a range of accuracies and FLOP counts. This helps to avoid many of the pitfalls in the comparisons of model compression approaches detailed by [2] and [3].\\n2. Make it clear that some algorithms under comparison have additional capabilities compared to the proposed approach (e.g., RigL with sparse training).\\n3. Add comparisons with other algorithms of similar capability to the proposed approach.
The magnitude pruning approach of Zhu & Gupta [1] would be ideal for this I believe.\", \"comments\": \"The brackets are backwards in the last paragraph on page 4. I would encourage the authors to explain more of the background of their approach (proximal operators, convex optimization, etc.) in sections 3 and 4. Many of those working in model compression who would be interested in this work will not be familiar with these topics.\", \"references\": \"1. https://arxiv.org/abs/1710.01878\\n2. https://arxiv.org/abs/1902.09574\\n3. https://arxiv.org/abs/2003.03033 \\n\\n[original score: 3 (clear rejection)]\\n11/24: Updated score based on updates from the authors. The addition of FLOP counts and more baselines in the experiments section greatly improved the paper. The proposed approach appears to achieve excellent FLOP-accuracy tradeoffs relative to existing approaches.\\n\\n[2nd score: 6 (marginal acceptance)]\\n11/30: Updated score based on updates from the authors. The discrepancy with some baseline numbers has been resolved and the authors added clarifying information to the paper regarding the counting of FLOPs.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
Yj4mmVB_l6 | Two steps at a time --- taking GAN training in stride with Tseng's method | [
"Axel Böhm",
"Michael Sedlmayer",
"Ernö Robert Csetnek",
"Radu Ioan Bot"
] | Motivated by the training of Generative Adversarial Networks (GANs), we study methods for solving minimax problems with additional nonsmooth regularizers.
We do so by employing \emph{monotone operator} theory, in particular the \emph{Forward-Backward-Forward (FBF)} method, which avoids the known issue of limit cycling by correcting each update by a second gradient evaluation.
Furthermore, we propose a seemingly new scheme which recycles old gradients to mitigate the additional computational cost.
In doing so we rediscover a known method, related to \emph{Optimistic Gradient Descent Ascent (OGDA)}.
For both schemes we prove novel convergence rates for convex-concave minimax problems via a unifying approach. The derived error bounds are in terms of the gap function for the ergodic iterates.
For the deterministic and the stochastic problem we show a convergence rate of $\mathcal{O}(\nicefrac{1}{k})$ and $\mathcal{O}(\nicefrac{1}{\sqrt{k}})$, respectively.
We complement our theoretical results with empirical improvements in the training of Wasserstein GANs on the CIFAR10 dataset. | [
"steps",
"time",
"gan training",
"stride",
"tseng",
"training",
"minimax problems",
"generative adversarial networks",
"gans",
"methods"
] | Reject | https://openreview.net/pdf?id=Yj4mmVB_l6 | https://openreview.net/forum?id=Yj4mmVB_l6 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"xJguOR9Hum",
"tMiFhXZIAO9",
"VDWmwqsTZP",
"_Nyy-fdto7a",
"xF3tJX6PtL",
"209qADXJiAy",
"_ZHcJQBhyxz",
"CvBWcXq43u_",
"9qbmDI2d5L-",
"lVwYVd9lbOV",
"XWHQfvu5_6G"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040509838,
1605353591715,
1605295537875,
1605282106163,
1605282060051,
1605282007860,
1605281964492,
1603908921873,
1603848615039,
1603137865534,
1602568759768
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3446/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3446/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3446/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3446/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3446/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3446/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3446/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3446/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3446/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3446/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper provides a unified view of some known methods for monotone operator inclusion problems like Forward-Backward-Forward (FBF) and OGDA, and provides new convergence results for the stochastic version of a variant of FBF called FBFp. All reviewers initially recommended rejection. The rebuttal and the manuscript update addressed several concerns from the reviewers, though the general consensus after rebuttal was still that the paper lacked significance for the ICLR community. The AC thinks that the paper could make an interesting overview paper for a more optimization- or theoretically-minded venue.\"}",
"{\"title\": \"we meant distance to solution\", \"comment\": \"Thank you for pointing out the article. We thought 'distance of the last iterate to the solution set' was meant, while the mentioned paper uses 'norm of the gradient' and 'gap'. Unfortunately we do not know whether this analysis can be adapted to the regularized setting, even under the stronger assumption of Lipschitz second derivatives.\"}",
"{\"title\": \"Last iterate convergence rates\", \"comment\": \"Please see the following paper for the last iterate analysis:\\n\\n- Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems\"}",
"{\"title\": \"Response to Reviewer4\", \"comment\": \"Dear reviewer, thank you for your time and remarks. For your convenience we highlighted all the changes in the pdf in red.\\n\\nSince some of the points raised in Cons. 1-3 are connected, we respond to them together.\\n\\n> The authors should try to express why they believe their unification result informs practical applications.\\n> It is not clear that the connection of OGDA to FBF and monotone inclusion provides any new insights about the convergence properties of either method.\\n> [...] any experimental insight that is particular to this work\\n\\nThe unification result itself may not provide much practical benefit, although it sheds some light on the stepsize requirements of the different methods (see the fourth comment to R3). The benefit of our work for practical applications lies more in the treatment of regularizers and the fact that FBF requires fewer evaluations (see the first reply to R3) and is thus able to save computation time (compared to EG) without sacrificing performance.\\n\\n> last iterate convergence of OGDA is still an open problem even in convex-concave problems\\n\\nAs far as we know, asymptotic convergence of the iterates is known for OGDA.\\nRegarding convergence *rates* for the (last) iterate(s), we don't think it is possible to obtain such a statement without the use of more restrictive assumptions (see the last comment to R3). If talking about rates for the gap function but in terms of the last iterate (instead of the ergodic one), this indeed constitutes an interesting and possibly obtainable (albeit more difficult) statement, but it was outside the scope of our work.\\n\\n> the intuition behind this gap function [...] is unclear\\n\\nThank you for this very valid comment. We added a clarifying remark and intuition in the updated version of our manuscript. Please see also the reply to R1 where we give a short summary.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Dear reviewer, thank you for your time and interest. For your convenience we highlighted all the changes in the pdf in red.\\n\\n> The paper takes a long time until it becomes clear what actually the monotone inclusion looks like.\\n\\nSince monotone inclusions are not a well-known object in the ML community, we wanted to make sure that the concept is properly motivated.\\n\\n> preceded by a long and unnecessary discussion about existing solvers\\n\\nDue to the similarities of the existing methods, we think that this exposition is necessary in order not to create confusion. First of all, it seems very natural to us to compare FBF with EG since the two methods are so closely related. Secondly, by recycling previous gradients we provide a novel intuition for OGDA/FRB, which we think is relevant and requires us to mention said methods.\\n\\n> p.3 claim that FBF has not been rigorously analyzed for saddle point problems. This is of course not true.\\n\\nWe meant this statement, though we did not write it explicitly, to be in terms of function values / the gap. We added this clarifying specification in the pdf on page 3.\\nAt the same time we would like to point out that in the contribution paragraph and in the conclusion this was already stated more precisely before.\\n\\n> The stochastic FBF has been studied in [...]\\n\\nWe added the reference and clarified the differences with the mentioned paper. In particular, the cited article does not deal with the minimax setting specifically and is therefore not able to make statements about the gap function.\\n\\n> For SFBF we know non-asymptotic convergence rates of the last iterate. This is not mentioned at all.\\n\\nThank you for pointing this out. We mentioned this in the updated version of our preprint and clarified that these rates are in terms of the fixed-point residual, which is a rather general notion and does not exploit the special structure of the problem.
Another key difference between our work and the mentioned paper is that we do not rely on minibatch sizes that go to infinity.\\n\\n> if the aim is to solve the VI over an unconstrained domain, then FBF coincides with EG [...]\\nIn my opinion it would therefore be cleaner to assume at the outset that the domain of $F+\\\\partial r$ is bounded\\n\\nWe do not see a convincing reason to assume that the domain of the problem is bounded. In fact there are many interesting regularizers which do not have a bounded domain. Note further that EG and FBF are not the same method in this regularized (but unconstrained) setting.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"Dear reviewer, thank you for your interest and remarks. For your convenience we highlighted all the changes in the pdf in red.\\n\\n> the authors could compare FBF with EG for a toy example like Eqn. (13)\\n\\nThe two methods are compared for this problem in Figure 1. The difference between the two methods is not so much the regularization term (which is part of the objective function) but how it is treated: per iteration, EG requires two evaluations of the projection (prox) corresponding to the regularizer, while FBF only needs one.\\nNote that in the absence of regularizers or constraints (in the problem formulation) EG and FBF reduce to the same method.\\n\\n> the authors should clarify the motivations and importance of adding regularizers. In the current version of the paper, section 2.4 is vague and doesn't explain the role of regularizers well in GAN training.\\n\\nWe added some remarks about regularizers used for GAN training in Section 2.4, mentioning the box constraints in the original WGAN formulation and spectral normalization for SN-GAN.\\n\\n> It was shown in [1] that the ergodic (averaged) iterates of extra-gradient converge with a rate of O(1/k)\\n\\nThank you for pointing this article out, which we were not aware of. We added the reference and adapted the text accordingly. Note, however, that our result still provides a generalization in the sense that we also cover regularizers, which encompass, but are not limited to, constraints.\\n\\n> a similar convergence analysis has been done for EG and OGDA.\\n> Could the authors clarify the novelty/difference of your proof technique for FBF?\\n\\nSince OGDA, EG and FBF are similar, they naturally rely on similar proof techniques. Through our unified approach it becomes easier to draw a distinction between, and highlight the similarities of, OGDA and FBF.
Because of this, we are able to pinpoint where the different stepsize requirements arise (OGDA only allows for half as large a stepsize compared to FBF, possibly negating any savings stemming from the reduced number of gradient evaluations). In particular, it indicates why OGDA tends to require a smaller stepsize than FBF/EG in applications.\\n\\n> It was shown in [2] that extra-gradient is NOT robust to gradient noise in convex-concave minimax optimization.\\nCould the authors comment on that and explain why FBF could achieve the rate of $O(1/sqrt(k))$ in the stochastic\\nsetting (while EG fails)?\\n\\nReference [2] itself mentions that EG exhibits the same $O(1/\\\\sqrt{k})$ rate, see Table 1 in that paper. These two seemingly conflicting statements stem, in our opinion, from the fact that one is with respect to the sequence of (last) iterates itself and the other one is in terms of the ergodic (averaged) ones. The mentioned issue is of course relevant, and a version of FBF which is robust to noise would be interesting.\\n\\n> Is it possible to derive the last iterate convergence rate?\\n\\nThis is unknown and even unlikely, as convergence rates for the iterates can typically only be obtained under more restrictive assumptions such as strong convexity/monotonicity (or error bounds).\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"Dear reviewer, thank you for your interest and the comments. For your convenience we highlighted all the changes in the pdf in red.\\n\\n> Why do you introduce h(y) here?\\n\\nThe functions $h$ and $f$ here act as regularizers. These could reflect the box constraints in the case of weight clipping or the spectral normalization in the case of SN-GANs. We added an explanatory remark in Section 2.4 of the manuscript. These functions are mentioned explicitly and treated separately in the algorithms because of their potential nonsmoothness.\\n\\n> Later the author restrict Equation(1) to a deterministic version\\n\\nWe restrict to the deterministic version only in the introductory Section 2. We try to differentiate between the stochastic and the deterministic setting not via the expectation in the objective function but via whether stochastic or batch gradient updates are used. This was done purely for ease of notation, in order not to have to write the expected value whenever the objective function appears. We tried to clarify this in the manuscript right below eqn (1).\\n\\n> I cannot understand the first sentence of Section 2.2.\\n> I don't know why the problem (1) can be written as Equation (8)\\n\\nWhat we meant was that the monotone inclusion corresponds to the first-order optimality condition of our main problem.\\nIn general, finding a first-order critical point constitutes a necessary condition for being a local solution. Due to the assumed convexity of the problem, every critical point is indeed a solution.\\nThus, solving the (monotone) inclusion is equivalent to solving the minimax problem and finding a saddle point. We clarified this point in the updated version.\\n\\n> the authors provided [...] lots of iterative methods.
It is difficult for me to distinguish\\nwhich are completely new\\n\\nWe rephrased this section slightly in order to highlight the origin of the different methods.\\n\\n> could you explain more about why $G_B (w)$ in Equation (10) is defined as this form?\\n\\nWe added a game theoretic interpretation of the minimax gap. In short it corresponds to the payoff each player can achieve by choosing the best response given the (suboptimal) strategy of their opponent. A set of strategies thus corresponds to a small gap if only a small payoff can be achieved for either player by playing the best strategy given the current one.\\n\\n> In Section 3.2, the authors provided a generalized FBF algorithm. Isn't this Algorithm 3.1 a combined re-written\\nversion of Equation (4) and Equation (5)?\\n\\nIndeed. It is a template in order to reconstruct these two methods and thus highlights how they are connected.\\n\\n> There is a big gap between Equation (9) [...] and the experimental results\\n\\nIn Section 2.2 we tried to make the connection between monotone inclusions and minimax problems clear. If it helps we can explicitly write down our methods for saddle point problems (which includes the training of GANs) in the Appendix.\\n\\n> there are no open source codes provided\\n\\nThese should be present in the supplementary material. If there is a problem with the zip file or the download, please let us know.\\n\\n> Although, the authors stated that \\\"Due to the theoretical nature of this work [...]\\\"\\n\\nWe rephrased the beginning of the experiment section in order to highlight our practical contributions.\"}",
"{\"title\": \"Many unclear or doubtful points\", \"review\": \"In this paper,\\nthe authors first formulate the optimization problem of GANs as an abstract minimax problem (Equation (1)).\\nAs compared to the original optimization objective of Goodfellow's GANs,\\nthere is an additional term $h(y)$.\\nWhy do you introduce $h(y)$ here? Just for facilitating the adoption of the MONOTONE INCLUSIONS on it?\\nThe authors should provide a clear explanation about this point.\\n\\nLater the authors restrict Equation (1) to a deterministic version,\\nwhich means that the noise input of GANs will no longer be considered.\\nThe noise input is an important ingredient of GANs.\\nDespite many new variants of the GANs,\\nat least, the noise input is important for Goodfellow's GANs, which are adopted in this submission as a starting point.\\nIt is very hard for me to decide whether this simplification is appropriate.\\nExcept for Algorithm 3.2 and Theorem 3.2, which suddenly provide stochastic versions,\\nall the following results are on this deterministic version.\\nIn my opinion, the authors should provide more explanations on this point.\\n\\nI cannot understand the first sentence of Section 2.2, after reading it many times.\\nWhat is the exact necessary and sufficient optimality condition for the coupling function being convex-concave and differentiable?\\nBefore Equation (2), the authors didn't explain what monotone means and what a monotone inclusion is.\\nI don't think these concepts are very famous in the machine learning community.\\n\\nIn Section 2.3,\\nthe authors provided the introduction of lots of iterative methods.\\nIt is difficult for me to distinguish which are completely new findings for solving their inclusion problem and which are the existing results.\\nThe authors completed a literature review here.\\n\\nIn Section 2.4,\\nI don't know why the problem (1) can be written as Equation (8).\\nCould you provide more explanations about this point?\\nThe limited space of one submission
should be spent on important points.\\n\\nIn Section 3,\\ncould you explain more about why $G_B(w)$ in Equation (10) is defined in this form?\\nJust for facilitating the proof of Theorem 3.1 and Theorem 3.2?\\n\\nIn Section 3.2,\\nthe authors provided a generalized FBF algorithm.\\nIsn't this Algorithm 3.1 a combined re-written version of Equation (4) and Equation (5)?\\n\\nThere is a big gap between Equation (9), Theorems 3.1 and 3.2, and the experimental results shown in Section 4.\\nBesides, there is no open source code provided.\\nIt is very hard for me to figure out the details of the experiments and, at the same time, to check the reproducibility of this paper.\\n\\nAlthough the authors stated that \\\"Due to the theoretical nature of this work, the aim of this section is not to achieve new state-of-the-art\\nresults.\\\"\\nI don't think optimization is a theoretical branch of our machine learning community.\\nIf a proposed optimization method cannot be proved to be very useful in certain areas or specific tasks,\\nit will be very doubtful.\\nIf we intend to make theoretical contributions,\\nwe should try to prove theoretical properties or convergence bounds for existing useful optimization methods.\\n\\nSince ICLR is a highly selective conference,\\nthe originality and significance of one submission will always be the first priority.\\nI cannot accept this paper in its current state.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Contributions seem incremental\", \"review\": \"This paper is well-written, though I think the difference with extra-gradient can be made much clearer. For example, the authors could compare FBF with EG for a toy example like Eqn. (13). As far as I can tell, the main difference with extra-gradient is just the regularization term. From this standpoint, the authors should clarify the motivations and importance of adding regularizers. In the current version of the paper, section 2.4 is vague and doesn't explain the role of regularizers well in GAN training.\\nBesides, a similar convergence analysis has been done for EG and OGDA (the authors don't even cite properly), so I believe the convergence analysis of this paper is not novel. Given these reasons, I suggest the rejection of this paper and give a score of 4.\", \"comments\": \"- It was shown in [1] that the ergodic (averaged) iterates of extra-gradient converge with a rate of O(1/k), which is the same as for FBF. Could the authors clarify the novelty/difference of their proof technique for FBF?\\n\\n- It was shown in [2] that extra-gradient is NOT robust to gradient noise in convex-concave minimax optimization. Could the authors comment on that and explain why FBF could achieve the rate of O(1/sqrt(k)) in the stochastic setting (while EG fails)?\\n\\n- In GAN training, we typically care more about the last iterate since averaging could actually hurt the performance when the loss surface is highly nonconvex. Is it possible to derive the last iterate convergence rate? \\n\\n\\nI'm willing to increase my rating if the authors can resolve some of my concerns, especially my concern about the novelty of the analysis.\\n\\n\\n-------------\\n**I've read the authors' response. I'm still concerned about the novelty of the paper given that there are similar results for EG/OGDA.
Therefore, I stick to my original rating.**\", \"references\": \"[1] Convergence Rate of O(1/k) for Optimistic Gradient and Extra-gradient Methods in Smooth Convex-Concave Saddle Point Problems, 2019.\\n\\n[2] Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling, 2020.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Report on \\\"Two steps at a time --- taking GAN training in stride with Tseng's method\\\"\", \"review\": \"Summary: This paper introduces the forward-backward-forward splitting for variational inequalities. The main results are an asymptotic convergence result and a non-asymptotic convergence result using a restricted merit function. A new method, FBFp, is introduced and studied. Complete proofs are given. Preliminary numerical results obtained by training GANs are reported.\", \"pros\": [\"complete proofs\", \"A new stochastic operator splitting method based on Tseng's FBF is introduced in which the operator needs to be evaluated once per iteration. This splitting, called FBFp, is indeed new, and has the potential to be of practical relevance.\", \"Preliminary numerical results on standard GAN architectures.\"], \"cons\": [\"The paper takes a long time until it becomes clear what actually the monotone inclusion looks like. It seems that the problem of interest is formulated in eq. (9), preceded by a long and unnecessary discussion about existing solvers. It would have been much more accurate to simply start with the problem formulation, then propose your solution method, followed by a critical explanation of the contribution.\", \"p.3 claims that FBF has not been rigorously analyzed for saddle point problems. This is of course not true. Even the original paper by Tseng (A MODIFIED FORWARD-BACKWARD SPLITTING METHOD FOR MAXIMAL MONOTONE MAPPINGS, SICON 2000) discusses the application to saddle point problems. See Example 5 in that paper.\", \"The stochastic FBF has been studied in Bot et al., Mini-batch Forward-Backward-Forward Methods for solving Stochastic Variational inequalities, forthcoming in Stochastic Systems. Note that the arXiv version of that paper has been available since 2019. Overall, the paper contains only marginal contributions to the state-of-the-art.\", \"Only convergence rates for the ergodic average are provided.
It is known that the ergodic average might destroy important features of the true solution, such as sparsity. For SFBF we know non-asymptotic convergence rates of the last iterate. This is not mentioned at all.\", \"I have some doubts that the restricted merit function is the appropriate one here. Note that if the aim is to solve the VI over an unconstrained domain, then FBF coincides with EG, and there is nothing to be analyzed. The interesting case is thus only the constrained case. These constraints are usually encoded in the non-smooth part of eq. (8), so there is no need to write this explicitly. In my opinion, it would therefore be cleaner to assume at the outset that the domain of $F+\\\\partial r$ is bounded. The gap function used can in fact be traced back to Facchinei & Pang (2003) and has most likely been in use even longer than that.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Unifying Optimistic Gradient Descent Ascent and Forward Backward Forward For Convex-Concave Optimization\", \"review\": \"The authors in this paper, inspired by the applications of min-max optimization in GANs, study the problem of min-max optimization for convex-concave functions. The main contribution of the paper is proving novel convergence results for Forward-Backward-Forward (FBF) algorithms as well as Optimistic Gradient Descent Ascent (OGDA) based on tools from monotone inclusion problems. Their convergence results cover both deterministic and stochastic settings and the rates of convergence for suitably chosen gap function are non-asymptotic. Finally, they apply their algorithms both on toy problems but also on training GANs on CIFAR-10.\", \"pros\": \"1) To the best of my knowledge, the connection between OGDA and monotone inclusion problems is new.\\n2) Convergence results are non asymptotic for the specified gap function.\", \"cons\": \"1) The connection of this work with GANs is a bit tenuous because, as the authors also acknowledge, training GANs is a non-convex non-concave min-max problem. The authors should try to express why they believe their unification result informs practical applications.\\n2) This lack of connection is reflected in the experimental section as well. Most experiments re-establish that optimism, extragradient updates or regularization are beneficial for min-max optimization, observations that are already widely known. Again, any experimental insight that is particular to this work, would go a long way towards closing this gap.\\n3) It is not clear that the connection of OGDA to FBF and monotone inclusion provides any new insights about the convergence properties of either method. It would be very helpful, if the authors provided any additional intuition why their result could be used to answer open questions related to OGDA or FBF. 
For example, last-iterate convergence of OGDA is still an open problem even in convex-concave problems.\\n4) While the gap function used allows the authors to provide non-asymptotic guarantees, the intuition behind this gap function when its value is non-zero is unclear. Does this gap function have any game-theoretic interpretation?\\n\\nFor now, I am assigning a weak reject score mainly because it is unclear to me if there are significant implications of this unification result either in theory or practice. I am willing to increase my score substantially if the authors provide additional details that address my concerns outlined above. \\n\\n---------------------------------\\nPost-Rebuttal evaluation.\\n\\nI would like to thank the authors for their detailed answers, especially regarding the interpretation of the gap function.\\nBased on their answers, I decided to increase my score to a 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
K4wkUp5xNK | Invariant Causal Representation Learning | [
"Chaochao Lu",
"Yuhuai Wu",
"José Miguel Hernández-Lobato",
"Bernhard Schölkopf"
] | Due to spurious correlations, machine learning systems often fail to generalize to environments whose distributions differ from the ones used at training time. Prior work addressing this, either explicitly or implicitly, attempted to find a data representation that has an invariant causal relationship with the outcome. This is done by leveraging a diverse set of training environments to reduce the effect of spurious features, on top of which an invariant classifier is then built. However, these methods have generalization guarantees only when both data representation and classifiers come from a linear model class. As an alternative, we propose Invariant Causal Representation Learning (ICRL), a learning paradigm that enables out-of-distribution generalization in the nonlinear setting (i.e., nonlinear representations and nonlinear classifiers). It builds upon a practical and general assumption: data representations factorize when conditioning on the outcome and the environment. Based on this, we show identifiability up to a permutation and pointwise transformation. We also prove that all direct causes of the outcome can be fully discovered, which further enables us to obtain generalization guarantees in the nonlinear setting. Extensive experiments on both synthetic and real-world datasets show that our approach significantly outperforms a variety of baseline methods. | [
"outcome",
"invariant causal representation",
"environments",
"data representation",
"generalization guarantees",
"nonlinear setting",
"due",
"spurious correlations",
"machine",
"systems"
] | Reject | https://openreview.net/pdf?id=K4wkUp5xNK | https://openreview.net/forum?id=K4wkUp5xNK | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Hzxo4gh5goJ",
"GHvZzalZhw",
"aC2WZV-vLo",
"j0nKuTJIL7q",
"ePlVFLUOPjG",
"prtT9EC0R9L",
"AWqPXUSwB9Y"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040407778,
1606222670794,
1606222116089,
1606221799998,
1603896893020,
1603736145202,
1603647136668
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3443/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3443/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3443/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3443/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3443/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3443/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper aims to provide a framework for learning non-linear feature mappings such that are invariant to environments. The critical concern raised by the reviewers is their assumption: that causal features of the label are conditionally independent given the label. But in any DAG, conditioning on a common child (here, the label) renders the parents dependent. Their assumption thus is not going to hold other than on a measure zero set of parameters.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your comments.\\n\\n**Question 1.**\\n\\n\\\"*The clarity and organization of the paper could be improved. The algorithm should be moved from the appendix to the main text and the procedure should be described more holistically to give the reader an outline before diving into the details of each component section. The experiments section is also very unclear.*\\\"\\n\\n**Authors' Response**:\\n\\nThank you for the suggestion. We have updated all in the revision.\\n\\n**Question 2.**\\n\\n\\\"*The main issue that remains unclear to me is how the environment variable E is being used explicitly. It doesn\\u2019t seem clear to me that you would generally have access to E, but it is required in the rules necessary to determine the direct causes of Y. What are the E variables being used in each of the experiments?*\\\"\\n\\n**Authors' Response**:\\n\\nIn this paper, the environment variable $E$ is only an environment or domain index. For example, if we have $N$ training environments, then the environment variable $E$ takes value in {1, $\\\\ldots$, N}. We have clarified it in the last paragraph of Section 3.1.\\n\\n**Question 3.**\\n\\n\\\"*The novelty seems somewhat limited. It seems the theoretical results can be divided into (a) results about identifiability of the latent variable model and (b) the method for identifying the direct causes. Some of (a) follows directly from Khemakhem et al. (2020) - it is difficult to determine whether there is sufficient novelty here. (b) follows directly from well known constraint-based and bivariate causal discovery approaches.*\\\"\\n\\n**Authors' Response**:\\n\\nIt is worth emphasizing that our contribution in this paper is to propose **a novel learning paradigm** that enables OOD generalization **in the nonlinear setting**. 
This challenging problem, which was not solved before, is decently addressed in our paper by creatively integrating some existing methods in a comprehensive manner. Empirical results also demonstrate that our approach significantly outperforms IRM and IRMG in the nonlinear setting. Hence, our work would be a complement to the community of OOD generalization. \\n\\n**Question 4.**\\n\\n\\\"*The experiments are unconvincing. The proposed method outperforms existing approaches in a high noise synthetic data example and a kind of adversarial example where grayscale MNIST is colored in a way that is strongly correlated with the class label (the experimental setup and evaluation metric is very confusing here). It would be more convincing to the proposed method evaluated in a more realistic setting.*\\\"\\n\\n**Authors' Response**:\\n\\nIn this paper, we followed the same experiment settings of the pioneering works on the OOD generalization (i.e., IRM and IRMG) to conduct all the experiments for the fair comparison. The main goal of our experiments in the paper is to **CONCEPTUALLY** demonstrate that the proposed method can enable the OOD generalization in the nonlinear setting.\\n\\n**Question 5.**\\n\\n\\\"*Further, there are no (synthetic) experiments which confirm that the proposed method does in fact learn the causally relevant latent variables and its robustness in doing so. Since identifying the correct causal latent variables requires multiple conditional independence tests and bivariate causal discovery methods (on latent variables which may be estimated incorrectly), there is an obvious concern about how robust this procedure is in practice. It would be more convincing to see (e.g.) 
precision and recall with regard to selecting the correct causal latent variables when the ground truth is known.*\\\"\\n\\n**Authors' Response**:\\n\\nConsidering that the ground truth of the causal latent variables in the image experiments is unknown, we also conducted the experiments on the fully synthetic data in Section 5.1. In Appendix G, we provide an in-depth analysis on our approach, including the analysis on the importance of Assumption 1 and on the necessity of iVAE in Phase 1, how accurately and robustly the direct causes can be recovered in Phase 2, and how well the two optimization problems can be addressed in Phase 3.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your feedback.\\n\\n**Question 1.**\\n\\n\\\"*How to verify assumption 1 in a real case? Although the authors argued this assumption is not very restrictive and similar to the assumption in the iVAE paper, I think there is a difference between these two papers. In the iVAE paper, they assumed the latent variables to be conditionally factorial, while here the authors assume the potential causals (unobserved variables) are independent.*\\\"\\n\\n**Authors' Response**:\\n\\nIn fact, following the iVAE paper, in this paper we also assume the latent variables to be conditionally factorial, which is formally stated in Assumption 1. \\n\\nNote that, like many other areas (e.g., healthcare, epidemiology, medicine, etc.) in causality, the only way to verify the assumed causal diagram is through experiment. Empirical results demonstrate that this assumption works quite well. \\n\\n**Question 2.**\\n\\n\\\"*After discovering direct causes, they still need the IRM phase to learn an invariant predictor. IRM itself can identify spurious causes and learn an invariant predictor, so what is the gain of learning the first two phases? What if some spurious causations are wrongly detected by the second phase, will it affect the predictor?*\\\"\\n\\n**Authors' Response**:\\n\\nCompared to IRM, our method has at least two advantages. \\n\\nFirst, the challenging bi-leveled optimization problem in IRM can be reduced to two simpler independent optimization problems: (i) learning the invariant data representation $\\\\Phi$ from $O$ to $\\\\text{Pa}(Y)$, and (ii) learning the invariant classifier $w$ from $\\\\text{Pa}(Y)$ to $Y$. Both (i) and (ii) can be separately performed in a more efficient and effective manner. \\n\\nSecond, our method has generalization guarantees in the nonlinear setting, whist IRM only works in the linear setting. 
This guarantee comes from the basic idea that for both (i) and (ii), since there exist no spurious correlations between ${O}$ and $\\text{Pa}({Y})$ and between $\\text{Pa}({Y})$ and ${Y}$, learning theory guarantees that in the limit of infinite data, we will converge to the true invariant data representation $\\Phi$ and the true invariant classifier $w$.\\n\\nIf some spurious causations are wrongly detected by the second phase, it will affect the predictor for sure. \\n\\n**Question 3.**\\n\\n\\\"*The synthetic data experiment is not convincing at all. ICRL outperforms ERM and IRM in a very extreme case, where all the algorithms perform terribly, I don't think I can conclude ICRL is a better algorithm among others from this test case. If the authors can visually show the invariant representation of ICRL is more robust, that would be a good illustration.*\\\"\\n\\n**Authors' Response**:\\n\\nConsidering that the ground truth of the causal latent variables in the image experiments is unknown, we also conducted the experiments on the fully synthetic data in Section 5.1. In Appendix G, we provide an in-depth analysis on our approach, including the analysis on the importance of Assumption 1 and on the necessity of iVAE in Phase 1, how accurately and robustly the direct causes can be recovered in Phase 2, and how well the two optimization problems can be addressed in Phase 3.\\n\\n**Question 4.**\\n\\n\\\"*In the colored MNIST experiment, I assume the setting is the same as the IRM paper. While they said their IRM can reach 66.9+-2.5 acc, which is 7 percent higher than this paper and even higher than ICRL. So I wonder what causes this gap.*\\\"\\n\\n**Authors' Response**:\\n\\nSince the IRMG paper includes more baselines, for a fair comparison we followed their experimental setting and directly used their dataset. The baseline results in our paper directly come from the IRMG paper. 
The gap might be caused by the preprocessing methods used in the IRMG paper while creating the datasets.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your comments.\\n\\n**Question 1**: \\n\\n\\\"*I don't see how Assumption 1 can be justified as a plausible assumption in the general case where more than one latent variable is a direct cause of the outcome.*\\\"\\n\\n**Authors' Response**:\\n\\nIn fact, Assumption 1 applies to the general case where more than one latent variable is a direct cause of the outcome. Let us explain this point more clearly. \\n\\nConsider the example you mentioned that \\u201cIf both $X_1$ and $X_2$ are direct causes of $Y$, generically they will not be independent conditional on $Y$ and $E$\\u201d. This is absolutely true. In this case, $X_1$ and $X_2$ will be coupled together and treated as one variable. Without loss of generality, let us assume that $X_2$ is absorbed into $X_1$. Similarly, if $Y$ has more than two direct causes, all the other causes will be absorbed into $X_1$. Now, the question goes to how to represent the variable $X_1$ so that it is flexible enough to contain the multiple direct causes of $Y$. \\n\\nIf the data is simple, it is enough that $X_1$ is a one-dimensional continuous variable.\\n\\nIf the data is complex, we can let $X_1$ be a multi-dimensional continuous variable, say m-dim. Further, for simplicity we can assume that all $X_i$ is a m-dimensional variable. In this case, our approach will not change except replacing one-dimensional $X_i$ with m-dimensional $X_i$.\\n\\nWe have updated Section 3.2 to make it clearer.\\n\\n\\n\\n**Question 2.**\\n\\n\\\"*Theorem 4 is less than fully rigorous and is misleading. For example, the proof of Theorem 4 invokes a method for inferring causal directions in Zhang et al. (2017), but as far as I know, that method does not yet have a rigorous theoretical justification.*\\\" \\n\\n**Authors' Response**:\\n\\nThank you for the comment. 
To avoid such confusion, we have clarified in the revision that the method for inferring causal directions in Zhang et al. (2017) is a heuristic one that does not yet have a rigorous theoretical justification.\"}",
"{\"title\": \"Interesting approach to an important problem, but with a substantial flaw\", \"review\": \"This paper proposes a method for learning invariant (nonlinear) data representations and classifiers, using data from multiple domains. A key step in the method is to discover the direct causes of the outcome of interest from a set of latent variables that are recovered from observed variables via identifiable VAE. The problem being tackled is significant, and the general idea is interesting and sensible. The empirical results also look encouraging. However, there appears to be a major flaw in the theoretical setup. In order to apply identifiable VAE, the method needs to assume that any two latent variables are independent conditional on the outcome variable and the domain index (Assumption 1). But what about latent variables that are direct causes of the outcome variable? If both X_1 and X_2 are direct causes of Y, generically they will not be independent conditional on Y and E, will they? In the motivating example, only one latent variable is a direct cause of the outcome, so this issue does not arise, but the ambition, as I understand it, is to handle any number of direct causes. I don't see how Assumption 1 can be justified as a plausible assumption in the general case where more than one latent variable is a direct cause of the outcome.\\n\\nMoreover, Theorem 4 is less than fully rigorous and is misleading. For example, the proof of Theorem 4 invokes a method for inferring causal directions in Zhang et al. (2017), but as far as I know, that method does not yet have a rigorous theoretical justification. As it is formulated, Theorem 4 sounds like a theoretical identifiability result, and as such is not rigorously established by the proof given in the paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting topic but the assumption and the experiments are not convincing.\", \"review\": \"This paper proposes an invariant causal representation learning paradigm in the nonlinear setting. Based on a conditional factorial assumption, they proved identifiability up to a linear transform. The ICRL objective, in this case, is able to discover all the direct causes of the outcome, and thus enables OOD generalization.\\n\\nThe novelty of the paper seems to be in the generalization of the IRM framework to the nonlinear case which is interesting to me. \\nThe authors combined iVAE and IRM to solve this problem. Overall the paper is clearly written and easy to follow, but some conceptual issues remain.\", \"here_are_my_issues_with_the_paper\": [\"How to verify assumption 1 in a real case? Although the authors argued this assumption is not very restrictive and similar to the assumption in the iVAE paper, I think there is a difference between these two papers. In the iVAE paper, they assumed the latent variables to be conditionally factorial, while here the authors assume the potential causals (unobserved variables) are independent.\", \"After discovering direct causes, they still need the IRM phase to learn an invariant predictor. IRM itself can identify spurious causes and learn an invariant predictor, so what is the gain of learning the first two phases? What if some spurious causations are wrongly detected by the second phase, will it affect the predictor?\", \"-The synthetic data experiment is not convincing at all. ICRL outperforms ERM and IRM in a very extreme case, where all the algorithms perform terribly, I don't think I can conclude ICRL is a better algorithm among others from this test case. If the authors can visually show the invariant representation of ICRL is more robust, that would be a good illustration.\", \"In the colored MNIST experiment, I assume the setting is the same as the IRM paper. 
They said their IRM can reach 66.9+-2.5 accuracy, which is 7 percent higher than in this paper and even higher than ICRL, so I wonder what causes this gap.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Proposes some interesting ideas, but the novelty is limited and experimental results are unconvincing\", \"review\": \"The paper proposes Invariant Causal Representation Learning, which seeks to learn representations for downstream tasks that are based on only causally invariant latent variables so the representation is robust to shifts in the test environment.\\n\\nA model is assumed where an environment variable is a cause of all the latent variables and a target. The iVAE algorithm is used to learn the latent variable model. Then, a series of conditional independence tests and bivariate causal discovery methods are used to distinguish which latent variables correspond to causes (effects) of the target. Finally representations are learned from the observed variables to the causal latent variables of the target and then from these variables to the target.\\n\\nThe approach is evaluated using synthetic data and semi-synthetic data based on MNIST.\\n\\nThe clarity and organization of the paper could be improved. The algorithm should be moved from the appendix to the main text and the procedure should be described more holistically to give the reader an outline before diving into the details of each component section. The experiments section is also very unclear.\\n\\nThe main issue that remains unclear to me is how the environment variable E is being used explicitly. It doesn\\u2019t seem clear to me that you would generally have access to E, but it is required in the rules necessary to determine the direct causes of Y. What are the E variables being used in each of the experiments?\\n\\nThe novelty seems somewhat limited. It seems the theoretical results can be divided into (a) results about identifiability of the latent variable model and (b) the method for identifying the direct causes. Some of (a) follows directly from Khemakhem et al. (2020) - it is difficult to determine whether there is sufficient novelty here. 
(b) follows directly from well-known constraint-based and bivariate causal discovery approaches. \\n\\nThe experiments are unconvincing. The proposed method outperforms existing approaches in a high noise synthetic data example and a kind of adversarial example where grayscale MNIST is colored in a way that is strongly correlated with the class label (the experimental setup and evaluation metric are very confusing here). It would be more convincing to see the proposed method evaluated in a more realistic setting.\\n\\nFurther, there are no (synthetic) experiments which confirm that the proposed method does in fact learn the causally relevant latent variables and its robustness in doing so. Since identifying the correct causal latent variables requires multiple conditional independence tests and bivariate causal discovery methods (on latent variables which may be estimated incorrectly), there is an obvious concern about how robust this procedure is in practice. It would be more convincing to see (e.g.) precision and recall with regard to selecting the correct causal latent variables when the ground truth is known.\\n\\nIn summary, the paper introduces some interesting ideas, but the clarity could be improved, the novelty may be somewhat limited and the experimental results could be improved.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
VRgITLy0l2 | A priori guarantees of finite-time convergence for Deep Neural Networks | [
"Anushree Rankawat",
"Mansi Rankawat",
"Harshal B. Oza"
] | In this paper, we perform Lyapunov based analysis of the loss function to derive an a priori upper bound on the settling time of deep neural networks. While previous studies have attempted to understand deep learning using control theory framework, there is limited work on a priori finite time convergence analysis. Drawing from the advances in analysis of finite-time control of non-linear systems, we provide a priori guarantees of finite-time convergence in a deterministic control theoretic setting. We formulate the supervised learning framework as a control problem where weights of the network are control inputs and learning translates into a tracking problem. An analytical formula for finite-time upper bound on settling time is provided a priori under the assumptions of boundedness of input. Finally, we prove that our loss function is robust against input perturbations. | [
"priori guarantees",
"convergence",
"deep neural networks",
"analysis",
"loss function",
"lyapunov",
"priori upper bound",
"settling time",
"previous studies",
"deep"
] | Reject | https://openreview.net/pdf?id=VRgITLy0l2 | https://openreview.net/forum?id=VRgITLy0l2 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"lyl-5m9syZd",
"S8rjXtjwDkm",
"PiCmxUV2HYj",
"v2i53SgfkX",
"1VPaHRsWV_L",
"MMp--YKW2lv",
"jycZMY-K06E",
"9knqT2qPOdY",
"9jmFVevRyoy",
"YzWdj58SZDr",
"OaGQC2Ge0yb",
"eYO6UnyXyLC",
"tYlEQok6A6P"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040368847,
1606222984239,
1606146627358,
1605752839070,
1605752775314,
1605752707262,
1605752583196,
1605167481141,
1605146341787,
1604048778713,
1603809071949,
1603759999193,
1603675835075
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3441/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3441/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3441/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3441/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3441/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3441/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3441/Authors"
],
[
"~Mouhacine_Benosman1"
],
[
"ICLR.cc/2021/Conference/Paper3441/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3441/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3441/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3441/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper aims to study the convergence of deep neural networks training via a control theoretic analysis. This is a very interesting approach to establish theoretical understanding of deep learning. However, there are several concerns raised by the reviewers:\\n\\n1.\\tThe contribution of this paper is limited. The results simply follow from standard optimal control. It is not clear what new insight the paper provides.\\n2.\\tThere are already quite a few works on control theoretic analysis of deep learning. This paper did not do a good job on presenting its novelty and difference with existing works.\\n3.\\tThe experimental part is weak. It only involves small data set and very simple networks.\\n\\nBased on these, I am not able to recommend acceptance for the current manuscript. But the authors are encouraged to continue this research.\"}",
"{\"title\": \"Manuscript and responses to reviews have been updated\", \"comment\": \"Dear Reviewers and all,\\n\\nWe have revised our manuscript based on all the comments provided to us. We thank the reviewers for their comments and would like to welcome them to review our updated manuscript and let us know their views/suggestions or any concerns. \\n\\nWith Regards,\\nAuthors\"}",
"{\"title\": \"Response: Using Lyapunov function to model training is interesting, but scalability of the results is an issue.\", \"comment\": \"We have updated Table 3 with the convergence bounds and experimental convergence time on larger dataset (0.5 million images).\"}",
"{\"title\": \"Response: Using Lyapunov function to model training is interesting, but scalability of the results is an issue.\", \"comment\": \"Thank you for your time to read and review our manuscript.\\nThe assumption that we have used to derive our theorems is that atleast one of the input dimensions should be greater than zero and the all input dimension values should be finite i.e. less than some scalar value \\u2018a\\u2019. While training any machine learning task, the inputs are usually normalized (ranges from -1 to 1 or 0 to 1), hence the assumption does not seem unreasonable for any machine learning application. This assumption holds true even in batched case, as the inputs will be normalized to some finite range. Thus, our algorithm does not depend or have any restrictions on choice of samples in a batch.\\nWe include experimental results (test rmse, error plots) for larger dataset (0.5 million images) in the modified submission on Page 9. The convergence bounds and experimental convergence time will be updated in the table in a couple of days.\"}",
"{\"title\": \"Response: Review\", \"comment\": \"Thank you for your time to read and review our manuscript.\\nThe main innovation in this paper is to model neural networks as a dynamical control system. Recasting the problem of supervised learning as a dynamical control problem has several benefits. For example, it becomes possible to compute a priori convergence bounds, simpler hyper-parameter optimization and a finite-time convergent weight update. The main argument of the paper is not to use a particular loss function, but to demonstrate that by treating the loss function as a Lyapunov function and modifying the weight update accordingly, the supervised learning framework can benefit from the well-known methods of dynamical control systems. As for empirical generalization of the Lyapunov loss, please see the comparison of test error plots of the proposed Lyapunov loss function and that of traditional loss functions in the revised manuscript on page 6, 7, 8 and 9. \\nThank you for your comments on the loss plots. We have modified the plots so that they are normalized by the respective maximum of the loss function (which happens to be at the initial condition). We have also included these normalized plots of test error in the revised manuscript. We believe the information conveyed by such a comparison is the rate at which the proposed weight update converges. This has direct implication to the learning problem. \\nThe results hold for all activation functions which are once differentiable. We mentioned a particular activation function so as to show detailed derivation with backpropagation for the weight update equation. Looking at equation (8) of the manuscript, it is a requirement to have the activation function \\u03c3 such that the partial derivative with respect to weight w exists. All such activation functions are admitted by the proposed theory.\"}",
"{\"title\": \"Response: Interesting connections between neural network training and control, as well as control-theoretic analysis of the neural network loss function\", \"comment\": \"Thank you for your time to read and review our manuscript.\\ni) The target output is denoted by $y^{\\\\ast}$ for a given supervised learning task. The output of the neural network is y. The weight updates are such that y tracks $y^{\\\\ast}$ in finite time. In an analogy with control systems, y stands for the output of the system and $y^{\\\\ast}$ the commanded signal. In optimization, we try to minimize a cost / performance criteria where the weight update is obtained using equation (13). In our control theory based formulation, we have treated the cost / loss as a Lyapunov function. Then, the weight update is treated as a control signal for the plant (neural network) such that the temporal derivative of the Lyapunov function is always negative definite. Hence, the optimization is achieved via proper control synthesis.\\nii) The analysis in Section 2.1 corresponds to the case of a single neuron. The motivation behind providing that analysis is to demonstrate the main concept of finite-time convergent learning. This motivates the development of a more complex multi-neuron case. Of course, the single neuron case is not practically useful. \\n Both single and multi neuron case analysis hold for multiple data points. Theoretically, both theorems admit multiple data points. Empirically, we demonstrate this for the single neuron case in section 3 Figure 2, where we trained the single neuron case with Iris dataset (80 data points in training). For multi-neuron case, we use larger datasets and show that the analysis works for multiple data points (Page 9, figure 4). 
In case we have misunderstood what was meant by a single datapoint (we assume it means one input, output pair in the dataset), we welcome the reviewer to clear our misunderstanding.\\niii) We have included the corresponding test results in the modified submission in Figure 1, 2, and 3.\"}",
"{\"title\": \"Response: A priori guarantees of finite-time convergence for Deep Neural Networks\", \"comment\": \"Thank you for your time to read and review our manuscript.\\n Most of the real world datasets or tasks have a defined range of input values. For example, image values range from 0 to 255. Also, when the input is provided to the network, it is usually normalized, hence the boundedness assumption holds true for a wide range of tasks and datasets. \\nWe have included graphs on test data and accuracy in the revised manuscript. \\nRegarding classification tasks, yes, these results can be extended to classification tasks as well. However, it is necessary to identify suitable Lyapunov functions that result in a continuous finite-time update. Certainly, more work is needed before the presented results become applicable to classification tasks. \\nFrom a control theoretic perspective, a more aggressive controller will usually produce a large overshoot while tracking a command input. In the present case, the weight update is being treated as the controller. Hence, setting the gains $k_{ij}$ large will result in oversensitive training. Setting \\u237a\\u226a1 close to zero will also result in a highly non-Lipschitz training update. These scenarios correspond to overfitting. This can be seen, for example, in the limiting case (not covered by theory) in appendix A where \\u237a=0 causes oscillations in the training update due to limitations of explicit Euler discretization. It is therefore recommended to have a judicious balance between the required convergence rates and the ensuing overfitting of the network.\\n\\nWe agree that the bounds are conservative in nature. This is due to the fact that the upper bound on settling time is computed using the Lyapunov derivative. In future work, we plan to show how to derive less conservative bounds.\"}",
"{\"title\": \"Discussion on novelty and discretization\", \"comment\": \"Thank you for a detailed reading and review of our work. We appreciate your comments.\\n\\nAs for the title of the comment, indeed, the very idea of Lyapunov functions for continuous finite-time optimization is not new. We do not intend to claim it as our contribution. We also agree that we have not identified a new Lyapunov function. We have stated on the second page that our focus is on posing the learning problem as a control theoretic problem where existing Lyapunov theory is utilized. Perhaps the novelty claim made in Introduction is misleading and we agree to change it as follows: \\u201cThe novelty lies in the fact that the weight update is cast as a finite-time control synthesis such that the loss function is proven to be a valid Lyapunov function\\u201d. Thank you for this comment.\\n\\nHowever, we do stand by our claim that Deep Neural Networks have not been rigorously studied from a control theory perspective. In fact, the focus of the ICML 2020 paper being cited in your comment is on continuous and discontinuous differential equations and inclusions arising during optimization of cost functions. The main focus also seems to be on discretization. Theoretically, the discontinuous case in our results arises only when alpha=0 when the differential inclusion has to be considered in the sense of Filippov\\u2019s definition . We would like to stress that our theorems do not cover this case. The solutions of the differential equations are understood as defined in Bhat and Bernstein (SIAM, 2000). Of course, with very small values of 0<alpha<1, it is well known that the right hand side of the differential equation becomes non-Lipschitz and numerical methods may not give a solution that matches the corresponding analytical one especially in the presence of disturbances. Hence, we do not understand the comment why our results are incorrect for the case when alpha is nonzero. 
\\n\\nAs for the comment on superfluousness of continuous time optimization, we would like to point out a few references that help bridge the divide between the continuous and discrete-time analysis that the comment alludes to. We would also like to point out the latest advances in discretization algorithms for discontinuous cases (alpha=0). The implicit numerical schemes reported by Vincent Acary, Bernard Brogliato (\\u201cImplicit Euler numerical scheme and chattering-free implementation of sliding mode systems\\u201d in Systems and Control Letters, 2010) prove that the analytical and numerical solutions match after a finite number of samples at least for the case when there is no disturbance in control system. This result also extends to the multivariable case. These results relate to differential inclusions such as equation (8) in the cited ICML 2020 paper. Another recent relevant result on implicit numerical schemes is given by Brogliato et. al. (The Implicit Discretization of the Super-twisting Sliding-Mode Control Algorithm in IEEE Transactions on Automatic Control, 2020). As for the catch mentioned on explicit discretization, explicit Euler discretization results are given by Barbot et. al. (\\u201cDiscrete differentiators based on sliding modes\\u201d in Automatica, 2020) for a discontinuous case arising in sliding modes. This reference establishes optimal accuracy asymptotics of their continuous-time counterparts. This reference also encompasses continuous non-Lipschitz right hand sides. In the presence of these results, we do not agree that it is superfluous to apply continuous finite-time methods to the case of DNN when several discretization methods are available, including the one proven in ICML 2020 reference.\"}",
"{\"title\": \"The results of finite-time optimization using signed first order flows and Lyapunov function costs are not novel, sorry!\", \"comment\": \"This is a nice attempt to apply results of continuous optimization in finite-time to the case of DNNs\\u2019 training.\\n\\nHowever, the authors are invited to compare this work to the recent work of Romero et al., ICML 2020 (https://proceedings.icml.cc/static/paper_files/icml/2020/4879-Paper.pdf) on the subject of continuous finite-time optimization. Indeed, it appears to me that the Lyapunov cost that the authors are arguing to be novel is simply the Lyapunov function used in this ICML paper (see Proof of Theorem 1 sketch), where the function $f$ is replaced with the output of the DNN in this particular case. The optimization flow itself is very similar to the signed flow introduced in this ICML paper, referred to as q-SGF, leading to similar finite-time convergence results.\\n\\nBesides, it also appears to me that the theoretical analysis in this submission is incorrect due to the potential discontinuity of the optimization flow. Indeed, the authors are clearly stating that the acceleration (in continuous time) observed numerically is due to the \\u2018aggressive\\u2019 discontinuous flow. Well, that might be true, but that discontinuity needs to be carefully studied, since the argument that the authors are using in their Lyapunov analysis is only valid for Lipschitz continuous flows. For discontinuous flows, one must use the notion of differential inclusion for example, and the associated Lyapunov theory; please refer to the supplementary material of the ICML paper cited above (it can also be found in the more general version of the work, which includes the case of time varying cost functions, at https://www.merl.com/publications/docs/TR2020-088.pdf).\\n\\nFinally, it seems to me that the notion of continuous time optimization is rather superfluous in the context of DNNs due to their large scale. 
Indeed, one cannot expect to use a stiff ODE solver to be able to solve the discontinuous flows with the very high dimensions associated with DNNs. As such this work is not suitable for such application, and a proper discretization scheme is needed for that. The catch is, it is far from proven that any explicit discretization will lead to the same finite-time convergence result.\"}",
"{\"title\": \"A priori guarantees of finite-time convergence for Deep Neural Networks\", \"review\": \"The paper aims to make strides towards a theoretical understanding of Deep neural networks, which remains elusive to date. This paper uses a control theoretic formulation to analyze the convergence rate of deep neural networks. More specifically, a Lyapunov based analysis of the loss function is used to derive a priori upper bound on the settling time of a restricted set of fully connected neural network architectures with some assumptions on the input space.\\n\\nI'm interested to know, for what kind of real-world tasks or datasets is their assumption on the boundedness of the input valid?\\nAlthough the proposed Lyapunov loss provides the possibility of analyzing convergence guarantees a-priori, how does this affect the performance of the underlying model on test data? \\n\\nThe paper provides experiments supporting their theoretical claims for MLPs on a regression task and a single neuron on a classification task. They show that their proposed Lyapunov loss converges faster than the L1 and L2 losses, and faster than the a-priori upper bound. Can a similar loss function for MLPs on classification tasks be easily derived? In other words, do these results easily extend to classification tasks?\\n\\nAnd what effect does the new loss have on overfitting?\\n\\nI'm a bit confused by the theoretical upper bound. The derived upper bounds in Table 1 are orders of magnitude higher than the actual time taken, even with the L1 and L2 losses. What does this mean? What's the use of the upper bound in this case?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting connections between neural network training and control, as well as control-theoretic analysis of the neural network loss function\", \"review\": \"This paper presents a Lyapunov based analysis of the loss function in neural network training and derives a priori upper bounds on the settling time of the training, which somewhat complements existing studies. The supervised neural network learning problem is formulated as a control problem with the weight parameters being the control input, and the learning problem as a tracking problem. Analytic formula for computing the finite-time upper bound on the settling time is provided under suitable assumptions on the input. Furthermore, the loss function is also shown robust against input perturbations.\\n\\nThis paper contains some interesting ideas in revealing relationships between control and neural network learning, which is a plus. Hopefully, this can further motivate exploration and application of more control-theoretic tools to understanding of neural network training. Although this paper is fairly readable, the presentation and organization can be improved. Several detailed comments are provided below. \\n\\ni) In control, particularly tracking problem, it is known that there is a given reference signal y(t) one wants the control plant to track. Nonetheless, here in the discussion the y^\\\\ast I guess is determined by the loss function, training method, data, as well as the neural network architecture altogether, right? What exactly is this y^\\\\ast? How is it related to e.g., the equilibrium point of the weight parameters and stationary points in the optimization context? \\n\\nii) The current analysis in Section 2 pertains to a single data point? How would having more data points affect the analysis and the results? In that case, what would be the y^\\\\ast? Or will be functions of the input? 
\\n\\niii) In the experiments, since the Lyapunov loss function and other loss functions are plotted, how does the loss function convergence correspond to the learned weights parameters? In the context learning, one is more interested in the neural network parameters that not only capture the training data but also predict well the unseen ones? So it would also be interesting to present the corresponding testing results?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This work studies the finite-time convergence for neural networks. In particular, it tries to recast the problem of training neural networks as a control problem. Supervised learning is then reformulated as a non-linear control problem with a Lyapunov based loss. The weight update is then transformed to be the control input. Finally, convergence results are obtained with standard theory from non-linear systems.\\n\\nOverall, connecting neural networks with classical control theory is an interesting direction. However, the results presented in this paper seem limited, and it is not clear what contributions the current work really brings to the community.\\n(1) There does not seem to be enough innovation in this paper. To me, the result simply follows from classical control theory. The authors simply try to mimic the theory by having a candidate Lyapunov loss and continuous weight update equations. It is not clear why these are used for neural networks in the first place; rather, it seems that these are only applied for the sake of proving some technical results. For example, does the candidate Lyapunov loss actually generalize better (theoretically or empirically)? What are its properties? How does it compare to traditional loss functions? It is not convincing to me why someone should use it for training neural networks. It seems to be an artifact used solely for the theorem.\\n(2) Relatedly, the experiments focus on plotting the training loss of the Lyapunov loss and l1/l2 losses. From my perspective, this is not informative. The loss functions are likely not on the same scale; it is probably better to plot a \\\"normalized\\\" version so that the comparison indeed makes sense. Again, such a comparison does not reveal any interesting property about the new loss, e.g., generalization/testing error. \\n(3) Please clarify to what extent the results hold with respect to different activation functions. 
In Section 2.1, it is explicitly mentioned that sigmoid activation is used. In Section 2.2, the authors use the same notation \\\\sigma. Does the result hold for other common activation functions? If not, any comments on the difficulties?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Using Lyapunov function to model training is interesting, but scalability of the results is an issue.\", \"review\": \"The authors in this paper make an attempt at providing finite time convergence guarantees for the training process of neural networks, using ideas from control theory. The loss function in the training process is framed as a Lyapunov function. The training process at each time step is seen as assigning dynamics to the Lyapunov function over time, the convergence of which can then be analyzed using standard control theoretic techniques.\\n\\nFor some fixed input-output target, the idea is to come up with a weight update rule which guarantees the convergence rate under some assumptions on the inputs. This is the novelty in the paper. The extension to the multi-layer case is an extension of the back propagation algorithm. \\n\\nThough the above is an interesting contribution in itself, I am not convinced that the results for the fixed input case would generalize well to the batched input case, which in my opinion is more general, and has enabled the training of large scale neural networks. The authors have analyzed the robustness to perturbations in Section 2.4, specifically Eqn. 22, where the authors have bounds on the perturbation limits under which it can still guarantee convergence rates. For the batched case it might need some restrictions on the choice of samples in a batch. \\n\\nThe next concern I have is that the experiments look insufficient. The current experiments use much smaller datasets. I would be interested to see how this technique performs on some of the larger neural network architectures, since convergence guarantees become more important only when the time it takes to train a network is much longer.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
NlrFDOgRRH | Distributed Associative Memory Network with Association Reinforcing Loss | [
"Taewon Park",
"Inchul Choi",
"Minho Lee"
] | Despite recent progress in memory augmented neural network research, associative memory networks with a single external memory still show limited performance on complex relational reasoning tasks. The main reason for this problem comes from the lossy representation of a content-based addressing memory and its insufficient associating performance for long temporal sequence data. To address these problems, here we introduce a novel Distributed Associative Memory architecture (DAM) with Association Reinforcing Loss (ARL) function which enhances the relation reasoning performance of memory augmented neural network. In this framework, instead of relying on a single large external memory, we form a set of multiple smaller associative memory blocks and update these sub-memory blocks simultaneously and independently with the content-based addressing mechanism. Based on DAM architecture, we can effectively retrieve complex relational information by integrating diverse representations distributed across multiple sub-memory blocks with an attention mechanism. Moreover, to further enhance the relation modeling performance of memory network, we propose ARL which assists a task's target objective while learning relational information exist in data. ARL enables the memory augmented neural network to reinforce an association between input data and task objective by reproducing stochastically sampled input data from stored memory contents. With this content reproducing task, it enriches the representations with relational information. In experiments, we apply our two main approaches to Differential Neural Computer (DNC), which is one of the representative content-based addressing memory model and achieves state-of-the-art performance on both memorization and relational reasoning tasks. | [
"memory augmented neural network",
"distributed memory",
"memorization",
"relational reasoning"
] | Reject | https://openreview.net/pdf?id=NlrFDOgRRH | https://openreview.net/forum?id=NlrFDOgRRH | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"3Vr8QfRC-9G",
"i90WIjrbUoa",
"g6Nt5hnPg34",
"oAqC7JrrUR5",
"RfaMubSXjgp",
"C6m4DBMoYl",
"Wq92zDZiJOo",
"T0jy3mLapvT",
"ULNcDwAjNEB",
"nJWw47b_QUl",
"SsZKkZ0NH_C",
"ND7JPmeyfB",
"dnW4AgXr-Te",
"pjzuR2kI7yF",
"1Y4V9m5xNl5",
"FjqfWUIWJNm",
"SkJYmuaeiGi",
"WlRyjzrpPvt",
"vz61GfmB8lS",
"rH2iYWNpRPM",
"GLesDNQuH_k",
"Y6ui8syiwWE",
"19xJPGRdq47",
"OX9N9PHnEb1",
"RWrfnJfg9eM"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040440445,
1606217962032,
1606119098725,
1606119053086,
1606119021064,
1605950133296,
1605950098171,
1605724708443,
1605724412246,
1605722814766,
1605722762101,
1605722670901,
1605721559480,
1605718375270,
1605718138400,
1605718069213,
1605718025525,
1605715817500,
1605715776201,
1605715668164,
1604661640962,
1603976707822,
1603974353766,
1603950781290,
1603807945946
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3439/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3439/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3439/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3439/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3439/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3439/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"After carefully reading the reviews and the rebuttal, and after going over the paper itself, I'm not sure the paper is ready for ICLR. I do believe there is a lot of useful content in the current manuscript, and I urge the authors to keep working on the manuscript and resubmit it in due time.\", \"my_concerns_are_as_follows\": \"(a) there is a lot of discussion about *relational information retrieval* -- however there is a lack of any formalization of what this term means. I don't mind relational reasoning being used as motivation, but when it is used to consider what are valid baselines and what are not, I feel compelled to understand what exactly it means. Why is *self-attention* retrieval not *relational*? Besides the task being seemingly relational in spirit, how do we test whether the retrieval mechanism carries any relational information whatsoever? I think the community had a learning lesson here in the CLEVR dataset, which arguably does not require as much relational reasoning as it seemed. So I agree with Rev5, that there is a decent probability that the tasks we are using do not require relational information retrieval. While I understand that some of these systems are Transformer inspired, I feel the Transformer should be a baseline. \\n (b) I also feel the paper should take one of two paths. \\n - Either embrace larger scale tasks and baselines outside of the relational reasoning literature (like the Transformer), and particularly settings where self attention will potentially struggle due to the quadratic term or where such models tend to be hard to train due to the difficulty of doing credit assignment through the attention mechanism \\n - Provide more careful ablation studies and formalize the claims a bit more. Regarding, e.g., the discussion of a single larger memory vs multiple memory blocks. 
One of the main differences comes from the attention over which memory block to use in the proposed approach, which due to softmax has a unimodal behavior. So, is the reason why it works better this potential hiding of part of the memory representation (i.e., a better way of reading a subset of the memory entries)? This could potentially be done differently (e.g., with multiplicative interactions in the style used in WaveNet). This is just a random thought on this particular aspect. I have similar questions about the self-supervised loss. \\n\\n I find the paper focusing on improving performance (unfortunately on toy domains) rather than on ablation studies and a careful understanding of how things work. I realize there is some such analysis in the appendix. But I feel more of it should be in the main text. The paper is either proposing something that scales and works well at scale (and then understanding why is less important as it has direct application) or exploring a very specific phenomenon, in which case it is fine to stay on toy tasks, but there should be a bit of clarity in the claims, and an investigation whether the hypothesis (or intuition) put forward initially is the reason why the model works.\"}",
"{\"title\": \"Summary of revision\", \"comment\": [\"We appreciate all reviewer's constructive comments and feedback. We updated our paper according to the reviewer's concerns as follows.\", \"We,\", \"Updates $N^{th}$ Farthest result in Table 1.\", \"Add experimental results on Convex hull task (relational reasoning task) in Table 3.\", \"Add visualization of DAM's memory operation with respect to attentive gates in Appendix E.\", \"Describe the link between equations of Section 2 and equations of Section 3 [Response to Reviewer 3]\", \"Add descriptions of ARL for clarity in Section 3.2 [Response to Reviewer 1, 2, and 5]\", \"Add explanation about loss function used for ARL in Section 3.2 and Appendix B [Response to Reviewer 1 and 5]\", \"Add additional analysis on algorithmic tasks in Appendix G [Response to Reviewer 5]\", \"Add Transformer results for bAbI task [Response to Reviewer 2]\"]}",
"{\"title\": \"Have additional concerns?\", \"comment\": \"Dear AnonReviewer 2,\\n\\nWe believe that we have addressed your concerns and clarified some of your points and revised our paper accordingly. Could you please let us know if you have any additional concerns or questions? We would be happy to provide further revisions or experiments to address any remaining issues and would appreciate a response from you for an updated impression.\"}",
"{\"title\": \"Have additional concerns?\", \"comment\": \"Dear AnonReviewer 1,\\n\\nWe believe that we have addressed your concerns and clarified some of your points and revised our paper accordingly. Could you please let us know if you have any additional concerns or questions? We would be happy to provide further revisions or experiments to address any remaining issues and would appreciate a response from you for an updated impression.\"}",
"{\"title\": \"Have additional concerns?\", \"comment\": \"Dear AnonReviewer 5,\\n\\nWe believe that we have addressed your concerns and clarified some of your points and revised our paper accordingly. Could you please let us know if you have any additional concerns or questions? We would be happy to provide further revisions or experiments to address any remaining issues and would appreciate a response from you for an updated impression.\"}",
"{\"title\": \"Response to AnonReviewer2 within (2/3)\", \"comment\": \"**Thank you for your fast response.**\\n\\nHowever, AR loss literally does not need any buffer for sampled inputs. As we explained more specifically, in RNN like sequential processing models, at each time step \\u2018t\\u2019, a single item of the input sequence is provided to the model and **the model also produces output at that time step \\u2018t\\u2019 based on the accumulated information in hidden state**. For DNC it corresponds to the external memory contents. The back-propagation for model training starts to happen at the time step when a model receives a question word as input and target loss is computed. Therefore, we can compute AR loss at each time step and accumulate it before the target loss computation. This is one of the benefits of the Bernoulli trial based sampling scheme. For each time step, we toss a coin to decide whether to sample the current input or not. This is, so to speak, the Bernoulli trial. According to this coin toss, if the current input item should be sampled, then we can **immediately compute AR loss for current input from the predicted output of the model at \\u2018that time step\\u2019**. The error measure (L2 or cross-entropy) is decided by task property or data type. After AR loss computation for the current input item, **it can be stored in a single variable which accumulates all AR loss for another sampled inputs until target loss computation and back-propagation occur**. The only storage we need to keep for AR loss is a single accumulating variable. And this variable can be reset right after the current input sequence ends. Maybe you might expect to use small buffers for implementation convenience, however, the algorithm itself does not require and nothing to do with maintaining buffers. **Furthermore, AR loss is only applied while a model is in training. Therefore, it has no effect on test time**. 
I hope this very basic explanation helps your understanding of AR loss operations and why it does not need buffers.\"}",
"{\"title\": \"Response to AnonReviewer2 within (1/3)\", \"comment\": \"**Thank you for your fast response.**\\n\\n\\nFirst of all, our main argument is that our approaches, distributed representation and ARL, can be applied as a simple and efficient **relational information retrieving method** for any MANN, instead of **\\u2018self-attention\\u2019** or other attention methods which exhaustively search relational information with quadratic operations. Therefore, our focus is on the method itself for searching relational information, and Transformer is not the state-of-the-art model which uses self-attention for relational reasoning. In the experimental section, we already compared our model with the **self-attention based state-of-the-art performance model, STM**, for all complex relational reasoning tasks. As you mentioned in your review title as your concern, **the self-attention based memory network model is majorly adopted for comparison in our paper. As far as we know, at the time of writing this paper, STM (ICML2020) is the most recent state-of-the-art performance memory network model which uses self-attention for relational reasoning tasks.**\\nAlso, as you mentioned with \\u2018Transformer **like**\\u2019 models, the value of Transformer lies in its major mechanisms, such as self-attention and multi-head attention. Simply copying and pasting the original Transformer into a specific domain does not always produce good results. Therefore, recent MANNs are adopting the self-attention or multi-head attention concept into their models in their own way. Different from what you expected, the conventional Transformer does not have very good performance as a memory network model. Especially, **it does not perform well on smaller or more structured language understanding tasks** and has weaknesses in complex relational reasoning, which is well explained in UT[3]. In other words, the conventional Transformer is far from state-of-the-art performance on relational reasoning tasks. 
We also updated bAbI task result of Transformer for your reference in **Table 2**. Because of those weaknesses, **Universal Transformer, UT, is proposed by Google** who originally invented Transformer. In Universal Transformer paper, with extensive experiments, they show that **UT is superior to an original Transformer in every aspect when it comes to the tasks which require relational information memory.** Therefore, we already compared our model with a more advanced **direct** extension of Transformer model, and **\\u2018STM showed even better performance than such UT and STM is mainly adopting self-attention for relational reasoning tasks\\u2019.**\\n\\n\\nYou can reference UT paper for the full comparison between Transformer and UT. Moreover, as far as we know, at the time of writing our paper, UT is the most recent **\\u201cdirectly extended\\u201d Transformer** model **\\u201cfor relational reasoning performance as MANN (for tasks like bAbI, or structured language understanding)\\\"**. Other models are adopting the mechanisms of Transformer or focused on longer input sequence processing. If you know another example of a direct Transformer based memory network model **for the same purposed tasks**, please let us know, it will be very grateful as constructive comments and also helpful for the clarification of our work. The purpose of our experimental comparison is not the showing the comparison result of every pre-existing \\u2018Transformer like models\\u2019 to our model (which is not possible), rather showing the result of **the most related and representative memory network** model with the state-of-the-art performance on relational reasoning, **to support our main argument.**\\nAs far as we know, STM is the most recent (ICML2020) and the best performing model on relational reasoning tasks. It is adopting a full self-attention mechanism, as you wanted, and outperforming all other memory network models for relational reasoning. 
It is compared with our model on the complex relational reasoning tasks in our paper, which **strongly supports our argument that our distributed-representation-based approach is comparable to the self-attention mechanism for memory networks**. And as you pointed out in your review, your main focus is also the comparison with the self-attention-based model that has state-of-the-art performance.\"}",
"{\"title\": \"Clarification\", \"comment\": \"If the AR loss needs to keep track of which inputs have been chosen so far (so that it can sample from them), then it does need to keep a buffer of some sort. I was incorrect to assume that it needed a buffer of the literal inputs, since it could instead keep indices that can be used to find these samples in the data source. Nonetheless, I don't believe it's true that there is no additional memory demand, but I now appreciate that it is negligible.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for the response.\\n\\nThere is a single comparison to a Universal Transformer: a citation to previous results on bAbI. What about the other tasks? Also, I'm not sure that UT's are \\\"the most recent Transformer based model\\\", or even the most notable, but to be a bit more explicit in my question: why is the proposed model not compared to a conventional Transformer on all the tasks presented?\"}",
"{\"title\": \"Response to AnonReviewer2 (3/3)\", \"comment\": \"> \\\"lossy representation\\\" --> This term is used throughout, but I'm not sure it's warranted. How do we know that the distributed vector representation is truly lossy with respect to the information it must encode? In principle, nothing prevents it from being lossless. Given a complicated enough decoder, one wouldn\\u2019t just need a handful of bits to encode very complicated things losslessly.\\n\\nThe lossy representation term comes from the STM [2] (ICML2020) paper, and it does not mean that representation itself should be perfect to contain every possible information from the input sequence. We used it in the context of relational reasoning performance of stored representation. To avoid confusion, we changed it to \\u201clossy representation of relational information\\u201d. The LSTM or content-based addressing network encodes information from an input sequence to a single encoded vector. When trying to solve complex relational reasoning tasks based on sequential input data, usually such representation fails to include complex relations, such as \\u2018multi-hop\\u2019[4], that exist in the input sequence. Therefore, many researchers try to encode relational information in several formats. End-to-End memory network used 2 representations for each hop and others applied a self-attention mechanism to produce separate relational representation (RMC [1], STM, UT [3]). In this context, lossy representation means that a single representation itself does not include rich enough relational information for complex relational reasoning tasks.\\n\\n\\n\\n\\n\\n> However, even with its promising performance on a wide range of tasks, MANN still has difficulties in solving complex relational reasoning problems (Weston et al., 2015). --> There has been much work since 2015 that has improved MANN performance on these tasks. 
For example, the Sparse DNC, as eventually shown in the results section.\\n\\nAs shown in our experimental section, we not only included the variants of DNC but also showed the results of other state-of-the-art MANN models (RMC [1], UT [3], STM [2], MNM-p [5], MEMO [6]) that aim to tackle relational reasoning with attention mechanisms or that rely on different types of methods such as meta-learning.\\n\\n\\n\\n\\n> Through this attention-based reading process, DAM retrieves the most suitable information for the current task from distributed representations existing in the multiple memory blocks --> As shown in this text, and as used throughout, the term \\\"distributed representation\\\" is overloaded in this work. Traditionally the term \\\"distributed representation\\\" is used to denote a vector with real-valued elements, whereas here it is used to denote a set of such vectors, \\\"distributed\\\" across multiple memories.\\n\\nAs you pointed out, the way we refer to the architecturally distributed memory with its stored representations could mislead the reader into an incorrect perception of the term \\u2018distributed representation\\u2019. Therefore, we updated the manuscript accordingly to remove the confusion. In our paper, considering the traditional concept of distributed representation, we used the term because the main idea is coherent. The concept of distributed representation does not solely mean denoting a vector with real values; it is adopted more conceptually, based on how the vector is constructed. For example, the paragraph vector from PV-DM [7] is obtained by merging several different words sampled from the same paragraph to include the semantics of the paragraph. This vector is referred to as a distributed representation of that paragraph. 
In our case, several different encoded vectors of the same input item (word), each comes from a different representation subspace, are merged to a single vector with an attention mechanism to include more rich relational information for the following task. Therefore, the term \\u201cdistributed representation\\u201d is used based on its conceptual similarity.\\n\\n\\n\\n[4] Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. \\\"End-to-end memory networks.\\\" Advances in neural information processing systems. 2015. \\n[5] Munkhdalai, Tsendsuren, et al. \\\"Metalearned neural memory.\\\" Advances in Neural Information Processing Systems. 2019. \\n[6] Banino, Andrea, et al. \\\"Memo: A deep network for flexible combination of episodic memories.\\\" arXiv preprint arXiv:2001.10913 (2020).\\n[7] Le, Quoc, and Tomas Mikolov. \\\"Distributed representations of sentences and documents.\\\" International conference on machine learning. 2014.\"}",
"{\"title\": \"Response to AnonReviewer2 (2/3)\", \"comment\": \"> The authors also propose a new loss that forces the memory contents to be able to predict a sample sequence of previously observed inputs. (. . .) Memory costs grow linearly with time because of the need to preserve inputs for use in the ARL loss.\\n\\nBasically, our AR loss does not introduce any buffer for previously sampled inputs. As mentioned in the paper, it is an additional task loss term simply added to the target objective function without any modification to the network structure. The AR loss is computed at the time each input item is sampled with a Bernoulli trial, so we do not need to store the whole sampled input data. This means it introduces no additional memory space, and the memory cost does not grow linearly with time or sequence length because of ARL. It is only adopted during the training phase of the memory model, and it even expedites the learning speed of the given model. Also, its memory cost is much cheaper than that of the Transformer, since the Transformer needs storage for each and every self-attention operation and there are multiple such layers. For computational complexity, the Transformer needs quadratic comparison operations for full self-attention, whereas ARL only involves sampling and loss computation for a sub-portion of the input sequence. Therefore, in terms of both memory usage and computational complexity, the AR loss uses far fewer resources than the Transformer.\\n\\nTo explain in more detail, the AR loss term enforces that the memory network reproduce sampled input data based only on the representation stored in its memory matrix. The reproduced input data is stochastically sampled with a Bernoulli trial for each input item (with probability \\u2018p\\u2019). Because the AR loss is computed at sampling time and added to the target loss, there is no need for buffering or memorizing the sampled input sequence. 
Because the Bernoulli trials over the input items form a binomial distribution, the average proportion of the sampled input sequence is \\u2018np\\u2019, where n is the length of the input sequence. In other words, the AR loss tries to refresh the stored contents of memory at a rate of \\u2018np\\u2019. This stochastic sampling scheme prevents our model from learning to simply redirect the input to the output to satisfy the AR loss condition, and ultimately enhances overall memorization performance.\\n\\n\\n\\n\\n> Altogether, the paper is well put together and written well enough to understand the ideas and experiments. The authors did well to choose experiments that would demonstrate the strengths of their approach. Unfortunately, the empirical and rational comparison to Transformer-based approaches prevents me from recommending its publication.\\n\\nAs mentioned above, we compared our model with the most recent state-of-the-art memory network models that adopt Transformer-like approaches (self-attention, multi-head attention), and our model shows superior or comparable performance to such relation-seeking-oriented memory network models.\\n\\nIn our paper, one of our main arguments is that the complex relational information existing in an input sequence can be retrieved and stored for further use without extensive relation-searching operations, such as the full self-attention in the Transformer. Therefore, Transformer-like models are our counterparts for comparison, not baselines. In this paper, we propose a novel approach for retrieving relational information that does not rely heavily on self-attention operations, and we successfully showed that, even without extensive relation searching via self-attention, we can obtain a good memory network model with performance comparable to the state-of-the-art MANN. 
Also, to show that our two main contributions are generally applicable to any MANN, we added to the Appendix of our manuscript the experimental results of another MANN model (RMC-AR, DMRMC-AR) that integrates our modifications.\\n\\n\\n\\n\\n> \\\"insufficient associating performance\\\" --> It is unclear what this means.\\n\\nWe used the term \\u2018associating performance\\u2019 to represent how well a given memory network represents the relational information that exists in input sequences for a given task. The basic definition of \\u2018associative\\u2019 in an associative memory network is \\u201cthe ability to learn and remember the relationship between unrelated items\\u201d. In the same context, we used \\u2018associating performance\\u2019 for the ability of the memory network to retrieve the relationships among the many items in the input sequences; in other words, how much complex relational information can be encoded into the memory for the following tasks.\"}",
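The AR loss mechanism described in this response (a per-item Bernoulli trial, the loss computed at sampling time, no buffer of past inputs) can be sketched roughly as follows. This is an illustrative sketch, not the authors' code: the function name and the use of an L2 reconstruction term are assumptions (the paper picks the loss per task, e.g. cross-entropy for bAbI).

```python
import numpy as np

rng = np.random.default_rng(0)

def ar_loss_step(prediction, x_t, p, rng=rng):
    """Per-step AR loss sketch: with probability p, the current input item
    is sampled and the model's memory-based prediction must reconstruct it.
    The loss is computed at sampling time, so no buffer of inputs is kept."""
    if rng.random() < p:                       # Bernoulli trial for this item
        return float(np.mean((prediction - x_t) ** 2))  # L2 reconstruction term
    return 0.0                                 # item not sampled: no AR term

# Over a length-n sequence, roughly n*p items are reconstructed on average:
# total_loss = task_loss + lam * sum(ar_loss_step(pred_t, x_t, p) for t in range(n))
```

The sampled terms are simply accumulated into the target objective with a scaling factor, which matches the claim that ARL adds negligible memory and compute overhead.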
"{\"title\": \"Response to AnonReviewer2 (1/3)\", \"comment\": \"**Thank you for your constructive and valuable comments. We\\u2019ve revised our paper following the suggestions and will explain your concerns in the following.**\\n\\n> In contrast, something like a Transformer pays a heavy cost at read time, as it needs to perform a full self-attention operation (rather than simple lookup) across all stored memories. (. . .) This is especially important given that the complex nature of memory models (i.e., the complexity associated with learning how to read, write, etc.) has recently given way to a more simple approach using memory buffers and self-attention.\\n\\nWe already fully compared our model performance with the most recent Transformer based model (UT [3]) and self-attention-based models (RMC [1], STM [2]) to show the effectiveness of our approach. We added more descriptions for other recent MANN models for your understanding. More specifically, RMC adopted multi-head attention to find a relation between its memory slots and UT, Universal Transformer is a generalized Transformer model for complex relational reasoning tasks. STM is \\u2018self-attentive associative memory\\u2019 which uses full self-attention to retrieve relational information from the input sequence. In our experiments, compared to such Transformer-like models, the proposed DAM-AR showed better or comparable relational reasoning performance. Furthermore, as written in the paper, our main contribution is proposing a novel way of relational information retrieving method which can replace the self-attention based approach in Transformer. We repeatedly mentioned relation finding operations of other models in our paper and it mostly corresponds to the self-attention method.\\n\\n\\n\\n> This is not to say that there is no value in developing memory-based approaches, as there surely is. (. . .) 
Otherwise, the reader is left wondering how well this model compares to the simpler Transformer-based approach.\\n\\nFirst, the recent memory network models we adopted for comparison in our experiments are mainly self-attention-based models for relational reasoning. RMC [1] adopts multi-head attention to update the memory contents with relational information. STM [2], an abbreviation of its full name \\u201cSelf-attentive Associative Memory based Two-memory Model\\u201d, clearly adopts self-attention for its relation-seeking process, based on outer product operations. Furthermore, UT [3], the Universal Transformer, is an enhanced generalization of the Transformer model. Therefore, we think your concern about \\u2018not comparing with Transformer-like or self-attention models\\u2019 is resolved.\\n\\nSecond, in contrast to the self-attention-based models, which inevitably introduce quadratic computational overhead on complex relational reasoning tasks (bAbI, $N^{th}$ farthest), our DAM-AR introduces negligible additional overhead for the same tasks, since its basic memory block operation is the same as the baseline DNC and the only difference is the \\u2018K\\u2019 parallel executions of such memory block operations. Furthermore, to obtain these benefits, our model does not need to double or triple the original memory size: simply dividing the representation length (memory slot length) into two or three subparts was enough to achieve similar performance. DAM-AR\\u2019s relationally rich representations even expedite the training speed of the model, as shown in Fig. 2 of our experiments.\\n\\nTo address your concern and show the overall strength of our approach, we already compared both approaches on the same complex relational reasoning tasks in our experiments. 
If we consider the computational and memory overhead that self-attention causes, our DAM-AR approach is quite simple and does not introduce additional overhead over the baseline model. Even with such architectural and computational efficiency, our model achieves superior or comparable relational reasoning performance over the self-attention method.\\n\\n\\n\\n[1] Santoro, Adam, et al. \\\"Relational recurrent neural networks.\\\" Advances in neural information processing systems. 2018.\\n[2] Le, Hung, Truyen Tran, and Svetha Venkatesh. \\\"Self-Attentive Associative Memory.\\\" arXiv preprint arXiv:2002.03519 (2020).\\n[3] Dehghani, Mostafa, et al. \\\"Universal transformers.\\\" arXiv preprint arXiv:1807.03819 (2018).\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"**Thank you for the great feedback. We appreciate your comments for improving the clarity of our manuscript.**\\n\\nWe updated the manuscript accordingly with a further description of the DNC components and the related extended equations, including which equations are directly involved and which components are modified by the DAM approach. The main modifications apply to the interface vector generation part of DNC and to the attentive gate.\\n\\nMore specifically,\\n\\n\\n(DNC) $ \\\\xi_t = W_\\\\xi h_t = [ W_{\\\\xi,1} ] h_t \\\\in \\\\mathbb{R}^{L*R+3L+5R+3} $\\n\\n(DAM) $ \\\\xi_t = W_\\\\xi h_t = [ \\\\xi_{t,1}, \\\\cdots, \\\\xi_{t,K}, \\\\hat{g}_t^{at} ] $\\n\\n$ = [ W_{\\\\xi,1}, \\\\cdots, W_{\\\\xi,K}, W_{\\\\xi,at} ] h_t \\\\in \\\\mathbb{R}^{K*(L*R+3L+3R+3)} $\\n\\nDNC generates the memory operators $\\\\xi_t$, called the interface vector, for its single memory operation, and DAM extends this vector to multiple independent memory blocks. DAM generates $K$ DNC-like memory operators $\\\\xi_{t,k}$ (except for the temporal linkage operator) and newly introduces the attentive gate $\\\\hat{g}_{t}^{at}$ to read from those multiple memory blocks.\\n\\n\\n(DNC) $M_t=M_{t-1}\\\\circ(E-w_t^w e_t^\\\\top)+w_t^w v_t^\\\\top$\\n\\n(DAM) $M_{t,k} = M_{t-1,k} \\\\circ (E-w_{t,k}^w e_{t,k}^\\\\top) + w_{t,k}^w v_{t,k}^\\\\top$\\n\\nThe writing process of DAM is the same as in DNC, as shown in the above equations, except that the same write operation is executed in multiple memory blocks independently at the same time.\\n\\n\\n(DNC) $ r_t = M_t^\\\\top w_t^r $\\n\\n(DAM) $ r_t = \\\\sum_{k=1}^K g_{t,k}^{at} M_{t,k}^\\\\top w_{t,k}^{r} $ where $ g_{t,k}^{at} = Softmax(\\\\hat{g}_{t,k}^{at}) $ for $ k=1,\\\\cdots,K. 
$\\n\\n\\nIn the reading process of DAM, the basic reading procedure for each memory block is the same as in DNC, but DAM integrates every read-out value from the $K$ memory blocks into a single distributed representation with an attentive gate. The attentive gate, $\\\\hat{g}_{t,k}^{at}$, is the newly introduced part of DAM for the attentive interpolation.\\n\\nFurthermore, we appreciate your recommending helpful references on distributed representation.\"}",
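The attentive read described in this response, where K per-block read-outs are blended by a softmax gate, can be sketched numerically as follows. This is a minimal illustrative sketch, assuming the content-based read weights and gate logits are already computed; the function names are hypothetical, not from the authors' code.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the K gate logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def dam_read(memories, read_weights, gate_logits):
    """Sketch of DAM's attentive read.

    memories:     list of K memory blocks, each an (N, L) array M_k
    read_weights: list of K content-based read weightings, each (N,) w_k
    gate_logits:  (K,) pre-softmax attentive-gate values g_hat_{t,k}^{at}
    """
    g = softmax(gate_logits)                                   # g_{t,k}^{at}
    reads = [M.T @ w for M, w in zip(memories, read_weights)]  # r_{t,k} = M_k^T w_k
    return sum(gk * r for gk, r in zip(g, reads))              # attentive interpolation
```

With equal gate logits the result is just the average of the per-block read vectors, and a strongly peaked gate selects one block's read-out, mirroring the softmax interpolation in the equation above.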
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"**Thank you for your constructive and valuable comments. We\\u2019ve revised our paper following the suggestions and will explain your concerns in the following.**\\n\\n> I think the scientific statement is quite clear here and the paper is worth accepting; the only shame is that the authors did not apply this approach to a richer task than bAbI.\\n\\nWe appreciate your invaluable feedback. The reason we did not apply our model to larger and more complex tasks is that, on such tasks, even with an improvement, it is hard to clearly show whether the model\\u2019s relational reasoning performance contributed to the enhancement of task performance. To show the effect on relational reasoning performance based solely on the memory network, we chose to adopt the bAbI task and the $N^{th}$ farthest task, similar to other relational memory network research. However, as you recommended, experimental results on a larger and more complex task will be included in the final version of the paper.\\n\\n> Also it would have been nice to compare the approach to a multi-head attention transformer since these also use distributed representations (across heads).\\n\\nIn the experiment section of our paper, we show comparison results with other memory network models (RMC [1], STM [2]) that adopt Transformer-like multi-head self-attention, or that are themselves generalized Transformer models (UT [3]). We updated the details of these works in our paper. More specifically, RMC [1] adopts multi-head attention for its relation-searching process, and STM [2] (self-attentive associative memory) also internally applies extensive outer-product-type self-attention for relation finding, as its name indicates. 
Furthermore, UT [3], Universal Transformer is an enhanced generalization of Transformer model for complex relational reasoning tasks.\\n\\n> The authors may be interested in the following architecture MERLIN which also uses a reconstruction loss to improve memory representations: https://arxiv.org/abs/1803.10760\\n\\nThank you for recommending interesting research. It would be very helpful for our further research.\\n\\n[1] Santoro, Adam, et al. \\\"Relational recurrent neural networks.\\\" Advances in neural information processing systems. 2018.\\n[2] Le, Hung, Truyen Tran, and Svetha Venkatesh. \\\"Self-Attentive Associative Memory.\\\" arXiv preprint arXiv:2002.03519 (2020).\\n[3] Dehghani, Mostafa, et al. \\\"Universal transformers.\\\" arXiv preprint arXiv:1807.03819 (2018).\"}",
"{\"title\": \"Response to AnonReviewer1 (3/3)\", \"comment\": [\"Q3 (Model's large memory slot size compared to input sequence size on Algorithmic task)\", \"As you pointed out, we reconfigured our model to use a smaller number of memory slots than the input sequence length, re-evaluated its performance, and updated the manuscript accordingly (Appendix G.2). The revised experimental results show the benefits of the DAM-AR architecture more clearly. We appreciate your constructive comments for improving our experimental verification.\", \"Q4 (Baseline model)\", \"The major contribution of our paper is showing that complex relational information can be learned through distributed representations of the input and ARL, instead of relying on computationally expensive \\u2018attention\\u2019-based approaches. For this purpose, we need a baseline model that is not primarily designed to use self-attention or other operations for relational information searching. Although DNC looks like an out-of-date model, it is quite appropriate because it is not designed to search extensively for relational information in input sequences, and it has a well-performing content-based addressing memory mechanism. If we apply our modifications to DNC and it then shows relational reasoning performance comparable to the state-of-the-art MANN, that is clear proof that our hypothesis is correct. If we adopted another, more recent MANN as the baseline, they all include their own ways of using self-attention and relation-searching operations. Such a model's architectural treatment of relations would be a confounding redundancy for our intention of proving the hypothesis: even with an improvement, it would not be evident whether our modifications are the source of the enhancement.\", \"Furthermore, in our experiments, our model outperformed all other DNC variants and even other types of MANNs (self-attention based models, meta-learning based models). 
Most of these models have state-of-the-art performance, but our model showed superior or comparable performance to them. The purpose of our research is not to pick a very good MANN model and incrementally improve it; our main goal is proposing a promising new way of retrieving relational information other than self-attention.\", \"Q5 (Loss function for ARL)\", \"We updated Appendix B to describe the loss functions adopted for $L_{ar}$. The loss function used for ARL is chosen based on the properties of the task and the input data type. For the bAbI task and the algorithmic tasks, we adopted cross-entropy loss, and for the $N^{th}$ farthest task, $L2$ loss is used.\"]}",
"{\"title\": \"Response to AnonReviewer1 (2/3)\", \"comment\": \"* Q2 (Experiments on Algorithmic task)\\n * The experiments we adopted for our research are two kinds. First, the experiments for the verification of basic memory network functionalities (Copy, Associative Recall), and Second the experiments for the whole model performance on complex relational reasoning tasks (bAbI, $N^{th}$ farthest). The experiments you are mentioning are the basic memory functionality experiments (Copy, Associative Recall) and they are commonly used tasks for the evaluation of MANN. Your concern that \\u2018tasks are too simple\\u2019 can be addressed by the second type of experiments on complex relation reasoning tasks.\\n * The purpose of first kind tasks is verifying the basic functionality of the given MANN. Algorithmic tasks (Copy, Associative Recall) are used to show the simple relation retrieving performance of memory networks and commonly adopted in papers such as NTM [11], SDNC [12], NUTM [13], DNC-MDS [14], etc. And Copy task is also adopted in many researches (NTM [11], DNC [15], NUTM [13], DNC-MDS [14], SDNC [12], STM [16], MNM-p [17], RIM [18], RMC [19]) to show the memorization performance of memory network, which is a crucial function of memory network. Since we integrated two main contributions, distributed memory architecture and ARL, to our baseline model, we need to verify its performance as a memory network model and show how much such modifications can improve each aspect of the memory network. Therefore, as an \\u2018Ablation study\\u2019, we used two tasks to show which aspect of the memory model is enhanced by which modification. After those basic experimental verifications as a memory network, we evaluated our model with complex tasks such as $N^{th}$ farthest task or bAbI task. For further verification, we also updated the manuscript with experimental results on Convex hull task. 
The experiments you mentioned are intended to show the basic functional performance verification of our model. And experimental results on more advanced tasks are also shown in the experiment section of our paper.\\n\\n[1] Trinh, Trieu H., et al. \\\"Learning longer-term dependencies in rnns with auxiliary losses.\\\" arXiv preprint arXiv:1803.00144 (2018).\\n[2] Caruana, Rich, and Virginia R. De Sa. \\\"Promoting poor features to supervisors: Some inputs work better as outputs.\\\" Advances in Neural Information Processing Systems. 1997.\\n[3] Pham, Trang, Truyen Tran, and Svetha Venkatesh. \\\"Relational dynamic memory networks.\\\" arXiv preprint arXiv:1808.04247 (2018).\\n[4] Ben-David, Shai, and Reba Schuller. \\\"Exploiting task relatedness for multiple task learning.\\\" Learning Theory and Kernel Machines. Springer, Berlin, Heidelberg, 2003. 567-580.\\n[5] Alonso, H\\u00e9ctor Mart\\u00ednez, and Barbara Plank. \\\"When is multitask learning effective? Semantic sequence prediction under varying data conditions.\\\" arXiv preprint arXiv:1612.02251 (2016). \\n[6] Rei, Marek. \\\"Semi-supervised multitask learning for sequence labeling.\\\" arXiv preprint arXiv:1704.07156 (2017).\\n[7] Fodor, Jerry A., and Zenon W. Pylyshyn. \\\"Connectionism and cognitive architecture: A critical analysis.\\\" Cognition 28.1-2 (1988): 3-71.\\n[8] Chalmers, David J. \\\"Syntactic transformations on distributed representations.\\\" Connectionist natural language processing. Springer, Dordrecht, 1992. 46-55.\\n[9] Ferrone, Lorenzo, and Fabio Massimo Zanzotto. \\\"Symbolic, distributed and distributional representations for natural language processing in the era of deep learning: a survey.\\\" arXiv preprint arXiv:1702.00764 (2017).\\n[10] Le, Quoc, and Tomas Mikolov. \\\"Distributed representations of sentences and documents.\\\" International conference on machine learning. 2014.\\n[11] Graves, Alex, Greg Wayne, and Ivo Danihelka. 
\\\"Neural turing machines.\\\" arXiv preprint arXiv:1410.5401 (2014).\\n[12] Rae, Jack, et al. \\\"Scaling memory-augmented neural networks with sparse reads and writes.\\\" Advances in Neural Information Processing Systems. 2016.\\n[13] Le, Hung, Truyen Tran, and Svetha Venkatesh. \\\"Neural stored-program memory.\\\" arXiv preprint arXiv:1906.08862 (2019).\\n[14] Csord\\u00e1s, R\\u00f3bert, and Juergen Schmidhuber. \\\"Improving differentiable neural computers through memory masking, de-allocation, and link distribution sharpness control.\\\" arXiv preprint arXiv:1904.10278 (2019).\\n[15] Graves, Alex, et al. \\\"Hybrid computing using a neural network with dynamic external memory.\\\" Nature 538.7626 (2016): 471-476.\\n[16] Le, Hung, Truyen Tran, and Svetha Venkatesh. \\\"Self-Attentive Associative Memory.\\\" arXiv preprint arXiv:2002.03519 (2020).\\n[17] Munkhdalai, Tsendsuren, et al. \\\"Metalearned neural memory.\\\" Advances in Neural Information Processing Systems. 2019.\\n[18] Goyal, Anirudh, et al. \\\"Recurrent independent mechanisms.\\\" arXiv preprint arXiv:1909.10893 (2019).\\n[19] Santoro, Adam, et al. \\\"Relational recurrent neural networks.\\\" Advances in neural information processing systems. 2018\"}",
"{\"title\": \"Response to AnonReviewer1 (1/3)\", \"comment\": [\"**Thank you for your constructive and valuable comments. We\\u2019ve revised our paper following the suggestions and will explain your concerns in the following.**\", \"Q1 (Contribution of our proposed methods)\", \"Our work is not a mere extension of DNC model. Our main contribution is showing that the \\u2018distributed representation\\u2019 concept [7, 8, 9, 10] can be applied to a content-based addressing memory model to obtain a more rich representation for relational reasoning from a given input sequence. In other words, we are proposing a novel way of retrieving relational information which can even replace the self-attention mechanism for MANN. Nowadays, most MANN models are focusing on using the attention or self-attention method to find relational information from the input sequence. However, we adopt a simple and unprecedented way of addressing the same problem. Without relying on a highly computational self-attention method, we still can obtain similar relational reasoning performance, even with DNC, and our parallel architecture and auxiliary task loss introduce negligible additional computational overhead compared to the attention mechanism. Furthermore, all these contributions are generally applicable to any MANN. As supporting evidence, we added the experimental result of another modified MANN model (RMC-AR, DMRMC-AR) that adopted our contributions to the Appendix of our paper. Therefore, our work is not just a simple incremental work of the usual MANN model, it provides a novel and efficient way for relational information finding method and also has generality as a policy which can be applied to any MANN model.\", \"Moreover, the idea introduced in [3], which you mentioned, is clearly different from our work. [3] is mainly purposed to store and retrieve structured data from memory. 
Structured data, such as graphs, already has pre-defined relationships among its entities, and there is no need to search for relational information between entities. The main point of [3] is how to store and retrieve such a graph structure in memory and use it for inference, not how to construct the graph. Furthermore, the contents of the multiple memory slots are not different representations of the same input data; they are disjoint parts of the input graph, and this has nothing to do with distributed representation or distributed memory. Compared to [3], our DAM makes no assumption about the input data structure other than sequential arrival, and it learns by itself the relationships between input items and encodes them into the distributed representation in memory.\", \"For the clarification of the association performance of the AR loss, it can be explained by multi-task learning theory. In a multi-task learning setting, a well-designed auxiliary task can allow the model to learn representations that benefit the main task [2, 4, 5, 6]. In our case, ARL not only helps the main task learn better representations by reinforcing the relationships that exist in the input sequence for a given task, but also enhances the memorization performance of the memory network by continually refreshing input contents from memory. The \\u2018association reinforcing\\u2019 effect comes both from the ARL task\\u2019s re-memorization of representations in memory and from its multi-task learning setting with the main task. In ARL\\u2019s own task, each item of the input sequence is sampled by a Bernoulli trial with probability \\u2018p\\u2019, and whenever an input item is sampled, its loss is computed against the model\\u2019s prediction output at that time step. In this way, the ARL is summed up and added to the target objective with a scaling factor. 
When the length of the input sequence is \\u2018n\\u2019, the sequence of Bernoulli trials follows a binomial distribution with probability \\u2018p\\u2019 and \\u2018n\\u2019 trials. Therefore, on average, the expected number of ARL samples is \\u2018np\\u2019, meaning that an \\u2018np\\u2019 portion of the input sequence is stochastically sampled and reconstructed by the model. This stochastic sampling method enables the model to refresh an \\u2018np\\u2019 amount of the input sequence from memory, so the reconstruction process enhances the overall memorization performance of the underlying memory network model. A similar approach, used for a different purpose in [1], provides clear evidence of ARL\\u2019s effect.\"]}",
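The binomial argument above (independent Bernoulli(p) trials over n input items, so the number of reconstructed items has mean n*p) can be checked with a quick simulation; the concrete values of n, p, and the number of simulated sequences are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, runs = 100, 0.3, 2000

# One Bernoulli(p) trial per input item, repeated over many simulated sequences;
# each row's count of sampled items is a Binomial(n, p) draw.
counts = (rng.random((runs, n)) < p).sum(axis=1)
mean_sampled = counts.mean()
print(mean_sampled)  # concentrates around n * p = 30
```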
"{\"title\": \"Response to AnonReviewer5 (3/3)\", \"comment\": [\"Q3 (Scalability of DAM)\", \"The experiment you are mentioning is the scalability test for DAM. It is designed to show the performance difference between one large memory and several smaller memory blocks. However, the experimental setting you are suggesting is not directly applicable, because, for each task, there is a minimum memory length for correct information storage in content-based addressing. For the bAbI task, the reason we did not iteratively divide the fixed total memory size into a collection of smaller memories is that there is a minimum representation length required for a sub-memory block to avoid information loss. In the external-memory-matrix-augmented content-based addressing scheme, if the length of a memory slot becomes too small for a given task, the input content cannot be encoded into storage correctly and the performance of content-based addressing memory severely degrades.\", \"We designed the scalability experiment to answer the following two questions: 1) \\u201cWhich memory configuration is better for relational reasoning when the total memory size is the same: \\u2018a single large memory\\u2019 or \\u2018several smaller memory blocks\\u2019?\\u201d and 2) \\u201cCan we get more performance enhancement if we use more memory blocks?\\u201d. To show the effect of the distributed memory architecture compared to a single memory system, we designed the scalability experiment. The memory models we named DAM-2, 3, and 4 simply represent the number of sub-memory blocks they use. In other words, the scalability experiment shows the effect of varying the hyperparameter \\u201cthe number of sub-memory blocks\\u201d on the relational reasoning task. It reveals that simply increasing the size of a single memory does not help the performance of the memory network. In Fig. 
3 of the paper, the orange dotted line represents a DNC model with the same total memory size as each of DAM-2, 3, and 4. As shown in Fig. 3, when the total memory size is the same, using several smaller sub-memory blocks is more helpful.\", \"However, to address your concern as we understood it, we updated our paper with a similar scalability experiment on the Associative Recall task, which needs a smaller minimum memory block length, and iteratively divided the fixed-size memory until it reached the minimum length. The experimental result shows a performance pattern similar to the scalability test on bAbI. If we use a larger number of sub-memory blocks, the accuracy increases compared to a single large memory of the same size. If we fix the total memory size and iteratively divide it into smaller sub-blocks to construct DAM-2 and 3 before information loss occurs, the results are also similar. Before information loss occurs, more blocks provide better performance.\", \"If we misunderstood your point or our answer does not fully address your concerns, please let us know. We will gladly respond with further explanation.\", \"Q4 (Relevant model)\", \"Thank you for recommending an interesting paper. The paper you mentioned is a generative memory model which memorizes the distribution rather than an encoded representation of the input data. It is a Bayesian memory that combines several smaller identical memory networks with product factorization. The adoption of multiple memories for the entire model is similar to ours; however, their work mainly focuses on memorizing a good input distribution based on the Bayesian rule, and the input data should be exchangeable episodes, which means the order of input data does not matter. Therefore, it is hard to apply to sequential input data where ordering is important and has relational meaning. The QnA task from NLP is one such example. 
In contrast, our model can effectively model the relational information that exists in such sequential input data according to the target task. Therefore, our model is more appropriate for NLP problems where input data is provided in order and includes relational information. Furthermore, our model is more adaptive to the target task because our architecture can encode target-task-related information while learning the distributed representation of the input data.\"]}",
"{\"title\": \"Response to AnonReviewer5 (2/3)\", \"comment\": [\"Q2 (Intuition about DAM architecture)\", \"The basic intuition underlying the DAM architecture can be explained in two ways. First, it is not a simple division of a single large memory space; rather, it is a collection of separate, independent memory spaces. In other words, each memory block has its own complete representation space. Therefore, the same input can be stored in many different encoded versions, similar to the multi-head attention in the Transformer. Each version of the input data can represent one of the diverse contexts or relations it has in the sequence, the same as in multi-head attention. And DAM learns to combine such diverse representations into a single rich one which can include more complex relational information, such as \\u2018multi-hop\\u2019 [5] relations, in itself.\", \"The second perspective is the \\u2018distributed representation\\u2019 concept for the current input data. Distributed representation is a well-known concept which is frequently adopted in the NLP research literature. It is also conceptually adopted in the word embedding \\u2018Word2Vec\\u2019 and the \\u2018Paragraph Vector\\u2019 in PV-DM [6]. As in such research, a richer representation for a given task can be constructed by merging several other representations of the input data. In other words, we can infuse task-related information with such a vector encoding procedure. In our case, several different representations of the same input data are distributed across multiple memory blocks, and they are merged with attentional interpolation to produce the \\u2018distributed representation\\u2019 of the input data.\", \"Your concern that a \\u2018wide representation (wide memory length) might play a similar role as multiple representations\\u2019 can be clearly explained by the scalability experiment of DAM. 
In the scalability experiment, we increase the length of the single representation in DNC (DAM-1) and compare its performance with the other cases, which adopt multiple smaller representations (DAM-2, 3, and 4). As shown in Fig. 3 of our paper, for the same complex relational reasoning task (bAbI), the performance of the wide representation model (DAM-1) degrades as the size of the representation increases. In contrast, the multiple-representation models (DAM-2, 3, and 4) show enhanced performance with an additional number of sub-memory blocks. The intuition behind this improvement can be explained with a similar concept from the multi-head attention mechanism. In content-based addressing memory, even though we increase the length of the representation, the representation subspace for encoding input data remains the same as before. Therefore, representation diversity does not improve at all. However, if we use multiple separate sub-memory blocks, each independent memory block can have its own representation subspace. Such diverse representations of the same input data give the model more opportunity to learn sophisticated relational information that exists in the input sequence, such as \\u2018multi-hop\\u2019 [5] relations. For better understanding, we updated the manuscript with a visualized map of the attentive gate to show how the information is distributed across multiple memories.\", \"[5] Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. \\\"End-to-end memory networks.\\\" Advances in neural information processing systems. 2015.\", \"[6] Le, Quoc, and Tomas Mikolov. \\\"Distributed representations of sentences and documents.\\\" International conference on machine learning. 2014.\"]}",
"{\"title\": \"Response to AnonReviewer5 (1/3)\", \"comment\": \"**Thank you for the constructive comments. We appreciate the comments for improving the clarity of our statements and experimental verification. The manuscript is revised accordingly, and the responses to your main concerns are listed below.**\\n\\n* Q1 (Clarity of ARL)\\n * ARL is an auxiliary task which reconstructs a sampled input sequence based on memory contents in a multi-task learning setting. Its reconstruction task enhances the association performance (relation finding) of the main task based on the multi-task learning scheme. The goal of an auxiliary task in MTL theory is to enable the model to learn representations that are shared or helpful for the main task [1, 2, 3, 4]. Similarly, ARL does this implicitly: it allows the model to learn beneficial representations for the main task and, at the same time, enhances the memorization performance of a memory network.\\n * We added more descriptions of ARL to the manuscript for clarification. The formal definition of $L_{ar}$ is based on the error measure for estimating the difference between the \\u2018reconstructed input\\u2019 and the \\u2018original input\\u2019. It is defined according to the task property of the input data type; we described it in the Appendix. For the bAbI task we used cross-entropy loss, and for the $N^{th}$ farthest task we adopted $L2$ loss.\\n * Regarding the ARL sampling details, whether each input item of the sequence is sampled is decided by a Bernoulli trial with probability \\u2018p\\u2019. Therefore, when an input sequence is provided to the model, for each data item, the model decides whether it should sample that item or not. And whenever an input item is sampled, its loss is computed with the prediction output of the model at that time step. In this way, ARL is summed up and added to the target objective with a scaling factor. 
When the length of the input sequence is \\u2018n\\u2019, the sequence of Bernoulli trials follows a binomial distribution with probability \\u2018p\\u2019 and number of trials \\u2018n\\u2019. Therefore, on average, the expected number of items sampled by ARL is \\u2018np\\u2019. This means an \\u2018np\\u2019 portion of the input sequence is stochastically sampled and reconstructed by the model. This stochastic sampling method enables the model to refresh an \\u2018np\\u2019 amount of the input sequence from the memory. This whole reconstruction task is trained with the main task in a Multi-task Learning (MTL) setting.\\n\\n[1] Caruana, Rich, and Virginia R. De Sa. \\\"Promoting poor features to supervisors: Some inputs work better as outputs.\\\" Advances in Neural Information Processing Systems. 1997.\\n[2] Ben-David, Shai, and Reba Schuller. \\\"Exploiting task relatedness for multiple task learning.\\\" Learning Theory and Kernel Machines. Springer, Berlin, Heidelberg, 2003. 567-580.\\n[3] Alonso, H\\u00e9ctor Mart\\u00ednez, and Barbara Plank. \\\"When is multitask learning effective? Semantic sequence prediction under varying data conditions.\\\" arXiv preprint arXiv:1612.02251 (2016).\\n[4] Rei, Marek. \\\"Semi-supervised multitask learning for sequence labeling.\\\" arXiv preprint arXiv:1704.07156 (2017).\"}",
"{\"title\": \"Review\", \"review\": \"The paper introduces a modification to the Differentiable Neural Computer called Distributed Associative Memory (DAM) that comprises 1) multiple independent memory blocks and 2) an association reinforcement loss (ARL). Experimentally, DAM improves upon DNC on multiple tasks and shows comparable performance to some relation-aware architectures.\\n\\n**Paper strengths**\\n* It is interesting to see that a relatively simple modification can bring a prominent performance boost. While it probably still requires further investigation, the fact that an architecture with factorized memory blocks can match the performance of an explicitly relational architecture suggests that this is indeed a step in the right direction and/or the relational benchmarks currently used in the community are too simple.\\n\\n**Paper weaknesses**\\n* Clarity. ARL, which is one of the two novel components of DAM, is not described clearly. $l_{ar}$ is not formally defined anywhere and its textual description is rather vague. What exactly does \\\"sampled input sequence\\\" mean? Should it be called \\\"subsampled\\\" instead? Is it always valid to simply subsample individual input tokens?\\n* Since the improved performance presumably comes from the multiple individual memory blocks, it is important to understand how exactly information is factorized across them and how each of the blocks is used. One can argue that a wide enough representation can potentially learn the factorization scheme and mimic it using multiple reading/writing heads. To me, the basic intuition that is provided in the Introduction is not enough.\\n* I appreciate that the authors do compare memory capacity across different DNC variants, but then it is important to do so for all the baselines and ideally evaluate all the baselines with the same number of floating point numbers reserved for memory. 
Otherwise, the exact source of the improvements is not clear.\\n* Authors may want to discuss the following paper [1], which describes a highly relevant model.\\n\\nI am happy to revise my score if the points above are addressed by the authors.\\n\\n** References **\\n[1] Marblestone, Adam, Yan Wu, and Greg Wayne. \\\"Product Kanerva Machines: Factorized Bayesian Memory.\\\" arXiv preprint arXiv:2002.02385 (2020).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Multiple memories and reconstruction loss for MANNs\", \"review\": \"The paper proposes two extensions for existing memory networks (e.g., DNC) to improve associative reasoning: (a) multiple memory blocks instead of just one; and (b) self-supervised training as auxiliary tasks. Experiments conducted on toy datasets show that these extensions lead to improved performance over DNC and the method is somewhat competitive against newer variants on several synthetic tasks.\", \"pros\": [\"Understanding why MANN works and doesn't, and how to improve it are still open problems. And thus this work is welcomed.\", \"The empirical results are positive, suggesting the introduced ideas have merits.\"], \"cons\": [\"In terms of novelty, this work is quite incremental. The first extension is simply about dividing the external memory in DNC into multiple blocks that connect via an attentive gate. The argument of having multiple memory blocks is to diversify the representation of the same input. A similar idea has been introduced in [3], although the motivation was different. The second extension is a reconstruction loss on the input signals. To what degree it works as \\u201cassociation reinforcing\\u201d as claimed is unclear, even though the loss would work as a regulariser as in standard hybrid loss in existing neural networks.\", \"In terms of experiments, the tasks are too simple and particularly favor the model design in the paper. For example, in the copy task, we need faithful information from the input signals for correctly decoding (copying), which can be enhanced via the reconstruction loss during the encoding phase. The same thing applies to the associative recall task.\", \"Note that for both copy and associative recall, the length of the input sequence ([8, 32]) is much smaller than the number of memory slots (64) (see Appdx A.2.1). 
With such redundancy in storage, it is not very surprising to me that dividing the memory into smaller blocks can improve convergence.\", \"The main baseline used for comparison in this paper is DNC, which, in my opinion, is out-of-date as it can be outperformed easily (e.g., existing works on memory networks [1, 2] show significant improvements over DNC on these toy tasks).\"], \"some_minor_comments\": \"- L_ar(i_t, y_t) is not defined in the paper. Do the authors use L1, L2 or binary cross-entropy loss?\\n\\n[1] Relational recurrent neural networks, Santoro et. al., NIPS-2018\\n[2] Improving Differentiable Neural Computers through Memory Masking, De-allocation, and Link Distribution Sharpness Control, Csordas et. al., ICLR-2019\\n[3] Pham, Trang, Truyen Tran, and Svetha Venkatesh. \\\"Relational dynamic memory networks.\\\" arXiv preprint arXiv:1808.04247 (2018).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Convincing demonstration that distributing memory improves learning\", \"review\": \"The authors propose a distributed memory architecture which shares some interface with the Differentiable Neural Computer but crucially segments memory into a collection of K units. The authors show that by increasing K the model learns to use its memory for algorithmic tasks such as copying and associative recall, and learns faster. The authors also propose an auxiliary loss to improve memory representations, which involves reconstructing inputs from the representations in memory.\\n\\nI think the scientific statement is quite clear here and the paper is worth accepting; the only shame is that the authors did not apply this approach to a richer task than bAbI.\\n\\nAlso, it would have been nice to compare the approach to a multi-head attention transformer since these also use distributed representations (across heads).\", \"the_authors_may_be_interested_in_the_following_architecture_merlin_which_also_uses_a_reconstruction_loss_to_improve_memory_representations\": \"https://arxiv.org/abs/1803.10760\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review\", \"review\": \"This paper proposes an extension of the Differentiable Neural Computer networks (DNC).\\nIn these DNCs, the reading operations on the external memory are done by accessing a single memory block, which represents a single piece of information or knowledge. The architecture proposed in this paper aims instead to give the possibility of accessing multiple memory blocks at the same time. In this way, the approach of reading memory is more holistic (Chalmers, 1992). This is a desirable feature of distributed representations and, then, distributed memories. Otherwise, these memories are just similar to the classical approach of representing symbols (Fodor and Pylyshyn, 1988). This debate over what is the main characteristic of distributed representations is revitalized in Ferrone and Zanzotto (2020). It is then a needed extension of DNC.\\nThe paper is well written and the results are convincing.\\nHowever, there is a minor problem. There is no direct link between the equations in Section 2 and the equations in Section 3. Clearly, the DNC equations are extended by the equations in Section 3. Are these equations linked only through M, that is, the memory?\\n\\nReferences\\n\\nFodor, J. A., and Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: a critical analysis. Cognition 28, 3\\u201371.\\nChalmers, D. J. (1992). Syntactic Transformations on Distributed Representations. Dordrecht: Springer.\\nFerrone, Zanzotto (2020), Symbolic, Distributed, and Distributional Representations for Natural Language Processing in the Era of Deep Learning: A Survey\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"New memory architecture, but unfortunately misses an obvious comparison to self-attention models\", \"review\": \"In this work the authors propose a novel memory architecture wherein memories are stored in multiple ways across a series of memory blocks. By \\\"distributing\\\" the memories in such a manner, the model can flexibly retrieve one version of a memory or another, which enables more flexible computations when conditioning on that memory. The authors demonstrate that such a memory network does well in tasks involving relational reasoning.\", \"one_advantage_of_memory_based_models_is_that_they_amortize_compute_across_time\": \"they pay an upfront cost to shape and store a memory *online*, but gain an advantage at read time, where they just need to do a basic lookup among the stored memories. A consequence of these memory-based approaches is that the models need to anticipate how they should store a memory given what might come in the future. In this work, the authors propose the strategy of storing a memory in multiple different ways, which mitigates the risk that a single stored memory will be insufficient for what might come.\\n\\nIn contrast, something like a Transformer pays a heavy cost at read time, as it needs to perform a full self-attention operation (rather than a simple lookup) across all stored memories. Transformer-based approaches, however, profit immensely from tasks where memories need to be shaped differently at read time, as they can use the full power of self-attention to morph and condition memories given all the information they've accumulated up to that point in time. Given the recent surge of evidence of the usefulness of self-attention based models, and the fact that they can easily be interpreted and/or used as memory models, the authors would be remiss not to include a self-attention based baseline to which they can compare their model. 
This is especially important given that the complex nature of memory models (i.e., the complexity associated with learning how to read, write, etc.) has recently given way to a simpler approach using memory buffers and self-attention.\\n\\nThis is not to say that there is no value in developing memory-based approaches, as there surely is. However, the memory-based approaches should demonstrate their value in domains that play to their strengths. As stated above, these models amortize the cost of shaping memories over time, and in addition, they can keep a constant-size memory indefinitely in time. In contrast, a Transformer-based memory model would grow in memory cost as time increases, and the compute at read time would similarly grow quadratically with time. Thus, to demonstrate the value of a memory-based approach over self-attention, it is wise to pit the two against one another in a regime where self-attention simply becomes too costly; in other words, in a regime where a great number of time steps (and hence memories) need to be considered. Otherwise, the reader is left wondering how well this model compares to the simpler Transformer-based approach.\\n\\nThe authors also propose a new loss that forces the memory contents to be able to predict a sample sequence of previously observed inputs. Unfortunately, I believe the inclusion of this loss makes the absence of a Transformer baseline even more troublesome. This is because, for this loss to be implemented, we need to keep around a buffer of previous inputs, which is precisely the memory cost associated with using a Transformer! So, given the previous discussion on how memory models can in theory maintain a constant-sized memory, in the DAM this is no longer the case. Memory costs grow linearly with time because of the need to preserve inputs for use in the ARL loss.\\n\\nAltogether, the paper is well put together and written well enough to understand the ideas and experiments. 
The authors did well to choose experiments that would demonstrate the strengths of their approach. Unfortunately, the lack of an empirical and rational comparison to Transformer-based approaches prevents me from recommending its publication.\\n\\nThere are a few minor points scattered throughout, but I'll just call attention to the following:\\n\\n\\\"insufficient associating performance\\\"\\n--> It is unclear what this means.\\n\\n\\\"lossy representation\\\"\\n--> This term is used throughout, but I'm not sure it's warranted. How do we know that the distributed vector representation is truly lossy with respect to the information it must encode? In principle nothing prevents it from being lossless. Given a complicated enough decoder, one would just need a handful of bits to encode very complicated things losslessly.\\n\\nHowever, even with its promising performance on a wide range of tasks, MANN still has difficulties in solving complex relational reasoning problems (Weston et al., 2015).\\n--> There has been much work since 2015 that has improved MANN performance on these tasks. For example, the Sparse DNC, as eventually shown in the results section.\\n\\nThrough this attention-based reading process, DAM retrieves the most suitable information for the current task from distributed representations existing in the multiple memory blocks\\n--> As shown in this text, and as used throughout, the term \\\"distributed representation\\\" is overloaded in this work. Traditionally the term \\\"distributed representation\\\" is used to denote a vector with real-valued elements, whereas here it is used to denote a set of such vectors, \\\"distributed\\\" across multiple memories.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
]
} |
DGttsPh502x | Unsupervised Discovery of Interpretable Latent Manipulations in Language VAEs | [
"Max Ryabinin",
"Artem Babenko",
"Elena Voita"
] | Language generation models are attracting more and more attention due to their constantly increasing quality and remarkable generation results. State-of-the-art NLG models like BART/T5/GPT-3 do not have latent spaces, therefore there is no natural way to perform controlled generation. In contrast, less popular models with explicit latent spaces have the innate ability to manipulate text attributes by moving along latent directions. For images, properties of latent spaces are well-studied: there exist interpretable directions (e.g. zooming, aging, background removal) and they can even be found without supervision. This success is expected: latent space image models, especially GANs, achieve state-of-the-art generation results and hence have been the focus of the research community. For language, this is not the case: text GANs are hard to train because of non-differentiable discrete data generation, and language VAEs suffer from posterior collapse and fill the latent space poorly. This makes finding interpretable text controls challenging. In this work, we make the first step towards unsupervised discovery of interpretable directions in language latent spaces. For this, we turn to methods shown to work in the image domain. Surprisingly, we find that running PCA on VAE representations of training data consistently outperforms shifts along the coordinate and random directions. This approach is simple, data-adaptive, does not require training and discovers meaningful directions, e.g. sentence length, subject age, and verb tense. Our work lays foundations for two important areas: first, it allows comparing models in terms of latent space interpretability, and second, it provides a baseline for unsupervised latent controls discovery. | [
"interpretability",
"unsupervised interpretable directions",
"controllable text generation"
] | Reject | https://openreview.net/pdf?id=DGttsPh502x | https://openreview.net/forum?id=DGttsPh502x | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"nEseTYD8ZJj",
"rx7V4ECBVmZ",
"Bl0s-kVQQeL",
"sLfWkBPDA3C",
"S5uNTnEtojx",
"MWu4hOJKhcx",
"hmHHN99TBv",
"YA_fvk0vmP0",
"ZF1DUm4mdXI"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040351883,
1606305255316,
1606304165515,
1606302714266,
1606302084708,
1604333411160,
1604101324275,
1603831810932,
1602875876540
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3438/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3438/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3438/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3438/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3438/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3438/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3438/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3438/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a simple method to discover latent manipulations in trained text VAEs. Compared to random and coordinate directions, the authors found that by performing PCA on the latent code to find directions that maximize variance, more interpretable text manipulations can be achieved.\\n\\nThis paper receives 4 reject recommendations with an average score of 3.75. The reviewers have raised many concerns regarding the paper. (i) The idea is straightforward with limited novelty. (ii) Mostly qualitative results are presented; more in-depth analysis and more solid evaluations are needed. (iii) The human evaluation is too small to draw any reliable conclusion. (iv) The proposed method is only tested on one text VAE; how well it generalizes to other models remains unclear.\\n\\nThe rebuttal unfortunately did not address the reviewers' main concerns. Therefore, the AC regrets that the paper cannot be recommended for acceptance at this time. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for a detailed review! We address your concerns below:\", \"weaknesses\": \"1. Thank you for this suggestion! We will look into the evaluation of style transfer models and add more quantitative results in the next revision.\\n2. To our knowledge, having few annotators for evaluation of text attribute manipulation (style transfer in particular) is quite common. For example, in [1] and [2] there are 6 and 10 annotators respectively.\\n3. The generation artifacts are caused not only by the latent shifts, but also by the model itself. We will evaluate this in more detail in the next revision of the paper.\\n4. We also evaluated the method on two models trained from scratch on smaller datasets, namely we trained CP-VAE [3] and the model from [4] on Yelp and Amazon datasets respectively. While some directions are also identifiable (e.g., sentence length and sentiment in case of CP-VAE), the overall generation quality is significantly lower. This decline in fluency is expected: the highest-quality generative models for language have millions of parameters and are trained on massive datasets. To our knowledge, OPTIMUS is the only high-capacity language VAE trained on a large dataset with openly available weights, which is why we mostly evaluate our method on this model.\", \"questions\": \"1. These are examples of directions obtained with the SNLI model that were manually chosen from a set of generated sentences to highlight the differences between word categories.\\n2. Thank you for the correction! We replaced this motivation with a description of the fully factorized prior distribution of standard Gaussian prior VAEs.\\n\\n[1] Disentangled Representation Learning for Non-Parallel Text Style Transfer. Vineet John, Lili Mou, Hareesh Bahuleyan, Olga Vechtomova. ACL 2019 \\n\\n[2] Improving Disentangled Text Representation Learning with Information-Theoretic Guidance. 
Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, Lawrence Carin. ACL 2020 \\n\\n[3] On Variational Learning of Controllable Representations for Text without Supervision. Peng Xu, Jackie Chi Kit Cheung, Yanshuai Cao. ICML 2020 \\n\\n[4] Controllable Unsupervised Text Attribute Transfer via Editing Entangled Latent Representation. Ke Wang, Hang Hua, Xiaojun Wan. NeurIPS 2019.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for a thorough evaluation of our paper and constructive feedback! We address your concerns below:\", \"cons\": \"1. Regarding the novelty: to the best of our knowledge, previous works on unsupervised discovery have not attempted to reveal interpretable latent directions in generative models for language. We believe this is a challenging yet important task that will bring forth more applications as the field develops, similarly to what we observe in image GANs. In addition, our method exploits the availability of the encoder network in VAEs and works directly in the model's latent space. Previous works were applied only to image GANs and required sampling from the latent distribution (which is not necessary for encoder-decoder models) or backpropagation through generated samples (which is not possible with discrete outputs).\\n2. Regarding the baselines: we agree that they are not as strong as one would prefer. However, the choice of baselines here is restricted: the task of unsupervised latent discovery in text generation models has not yet been approached, so the field itself is not quite established. If you have any suggestions on additional methods that would fit the setting of the paper, we would be happy to evaluate them.\\n3. Regarding the modification procedure description: we have updated the text to highlight that each interpretable direction is a vector in the latent space. As a result, applying the shift corresponds to adding this vector to the encoder output.\", \"questions\": \"1. We believe you are referring to Figure 1: it was meant to give an intuitive explanation of our method; both dimensions correspond to coordinates in an example two-dimensional latent space. The actual method works with 768-dimensional representations of sentences, which are much harder to visualize. \\n2. 
As suggested by Reviewers 3 and 4, it is possible to measure the fluency of generated outputs (in terms of perplexity) and attribute change quality (in terms of heuristic metrics when we can express the manipulation in simple words).\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your review! Please allow us to address your concerns below:\\n\\n1. In our preliminary experiments, we trained CP-VAE [1] and the model from [2] on Yelp and Amazon datasets respectively. While some directions are also identifiable (e.g., sentence length and sentiment in case of CP-VAE), the overall generation quality is significantly lower. This decline in fluency is expected: the highest-quality generative models for language have millions of parameters and are trained on massive datasets. To our knowledge, OPTIMUS is the only high-capacity language VAE trained on a large dataset with openly available weights, which is why we mostly evaluate our method on this model. \\n\\n2. Regarding the applicability of the method to AE models instead of just VAEs. Indeed, one can apply the technique to AEs as well; in fact, one of the models we evaluated was a regular autoencoder (SNLI, $\\\\beta=0$ in Table 1). However, a regular autoencoder is not a proper generative model because its sampling process is not well-defined. Hence, we focus only on variational autoencoders in our work. \\n\\n[1] On Variational Learning of Controllable Representations for Text without Supervision. Peng Xu, Jackie Chi Kit Cheung, Yanshuai Cao. ICML 2020 \\n\\n[2] Controllable Unsupervised Text Attribute Transfer via Editing Entangled Latent Representation. Ke Wang, Hang Hua, Xiaojun Wan. NeurIPS 2019.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your review! We address your points below.\\n\\n1. For a list of directions, we have added several examples of latent manipulations that were discovered by our method on the SNLI model ($\\\\beta=1$) and separated them into several categories to help the reader understand the differences between groups. These are available in the new section of the appendix.\\n\\n2. Regarding the sentences with non-applicable directions: such directions still change the sentences, although the content changes are not as drastic. The degree of content change depends on the magnitude of a shift and is a property of the model: if a particular VAE has latent shifts that can be used only for a subset of texts, any method can reveal them.\\n\\n3. On human and automatic evaluation: first, although we use 20 sentences for manipulation, we apply 20 manipulations to each sentence, which gives us 400 initial examples. We reuse these 20 sentences across different manipulations to observe how different latent shifts affect the same sentence. After filtering unchanged sentences, we show 5 examples of each manipulation, which corresponds to 100 transformation examples; for each shift, we have 5 degrees of intensity.\\nSecond, we agree that testing the interpretability of each manipulation with automatic metrics would strengthen the results. We will measure the fluency of generated sentences and the success of simple transformations in the next revision of the paper.\"}",
"{\"title\": \"Straightforward idea, hasty experiments\", \"review\": \"This paper studies latent manipulations in text autoencoders. The authors propose that compared to random and coordinate directions, moving in the PCA directions of encodings of training examples will produce more interpretable text manipulations.\\n\\nAs the idea is straightforward, I'd like to see more in-depth analysis and more solid evaluations. The authors characterize the effects of PCA directions into four types (length, word change, word insertion, and structure enforcement), but for each type only one example is provided. What are the changed/inserted words and what are the enforced structures? Can you give a comprehensive list of them? When are these latent directions applicable and when are they not? For sentences that are not applicable, what effects will they bring?\\n\\nThe only evaluation in the paper is human evaluation of whether a latent direction shift produces interpretable generations. It's conducted on 20 sentences, which is too small to draw any conclusions. The results on the Wikipedia dataset are very poor. You may test the success rate of manipulations in a specific direction (such as word insertion) through automatic evaluation. This can also reveal which manipulations are easier to implement and which are more difficult.\\n\\nI think with these changes, the paper will be more substantial, instead of spending 4 out of 8 pages on the background like in the current submission. Also, it's more suitable for NLP conferences than ICLR.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting examination and exploration of the OPTIMUS VAE model\", \"review\": \"The authors propose to use a PCA-like method on the latent space of VAE models to detect interpretable directions in an unsupervised manner. The idea is reasonable and practically useful for a large-scale pretrained VAE model, i.e., OPTIMUS. This paper has a clear idea and a thorough discussion of related works.\\n\\nI have some concerns about the model. The proposed method seems to require a large-scale pretrained model. If the VAE model is just trained at the SNLI scale, is the method still valid? From the PCA side, the method does not require a Gaussian latent space. So why it specifically targets VAE models, rather than just AE models, is another point of confusion. Since the directions are computed based on training data, I have a feeling there is no need to use a VAE model.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Unsupervised Discovery of Interpretable Latent Manipulations in Language VAEs\", \"review\": \"This paper presents a PCA-based latent variable language model for unsupervised latent variable interpretation.\", \"pros\": \"1. The authors propose to use PCA to extract the principal components of the results and claim them to be interpretable latent variables.\", \"cons\": \"1. The novelty is quite limited. Applying an existing well-known technique to obtain interpretable latent variables is not advancing this domain in the right direction.\\n2. The explanation of latent variables in this paper is self-justified. The self-defined baselines cannot convincingly convey that the latent variables are interpreted. And the baselines are quite weak.\\n3. In the quality evaluation, the authors do not clearly show how to modify the discovered latent variables to alter the sentences.\", \"question\": \"1. How do you encode a sentence in a two-dimensional space? Are both dimensions probabilities?\\n2. Other than the current quantitative and qualitative analysis, do you think any other quantitative evaluation would be helpful?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Simple method but unclear results\", \"review\": \"-------------------\\nSummary\\n-------------------\\nThis paper proposes a simple approach to discover interpretable latent manipulations in trained text VAEs. The method essentially involves performing PCA on the latent representations to find directions that maximize variance. The authors argue that this results in more interpretable directions. The method is applied on top of a VAE model (OPTIMUS), and the authors argue that different directions discovered by PCA correspond to interpretable concepts.\\n\\n-------------------\\nStrengths\\n-------------------\\n- The method is simple, and can be applied on top of existing text VAEs.\\n- Learning interpretable and controllable generative models of text is an important research area, and this paper contributes to this important field.\\n\\n-------------------\\nWeaknesses\\n-------------------\\n- The results presented are mostly qualitative. While I agree that quantitative evaluation is difficult with this style of work, the authors could have (for example) adopted metrics from the style transfer literature to show quantitative results. These metrics include perplexity (to see how fluent the generations are), reverse perplexity, and style transfer accuracy (this may not be applicable since there is no ground truth \\\"style\\\" in this work, but the ground truth style could be heuristically defined for some transformations, e.g. for singular/plural transformations).\\n- Human evaluation seems nonideal since it is only tested on 12 people.\\n- The generations are actually not so good in my opinion? E.g. many of the generations in the appendix are ungrammatical and/or semantically nonsensical. Again, metrics such as perplexity could quantify the fluency of generated text.\\n- The method is only applied to one text VAE model, which specifically uses BERT/GPT-2, so it is not clear if this will generalize to other models (e.g. 
models trained from scratch).\\n\\n-------------------\\nQuestions/Comments\\n-------------------\\n- In Figure 2, are these the top 4 principal directions? If not, how were these directions discovered?\\n- \\\"It is known that variational autoencoders trained with a schedule for the KL weight parameter (equation 1) obtain disentangled representations (Higgins et al., 2016; Sikka et al., 2019; John et al., 2019). Since OPTIMUS is also trained with KL annealing, canonical coordinates in its latent space are likely to be disentangled.\\\" I believe this is only valid for beta > 1 so it is not really applicable here.\\n-----------------------\", \"edit_after_rebuttal\": \"Thank you for the rebuttal and clarifying some of my questions. I have decided to keep the original score.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
snaT4xewUfX | Variational inference for diffusion modulated Cox processes | [
"Prateek Jaiswal",
"Harsha Honnappa",
"Vinayak Rao"
] | This paper proposes a stochastic variational inference (SVI) method for computing an approximate posterior path measure of a Cox process. These processes are widely used in natural and physical sciences, engineering and operations research, and represent a non-trivial model of a wide array of phenomena. In our work, we model the stochastic intensity as the solution of a diffusion stochastic differential equation (SDE), and our objective is to infer the posterior, or smoothing, measure over the paths given Poisson process realizations. We first derive a system of stochastic partial differential equations (SPDE) for the pathwise smoothing posterior density function, a non-trivial result, since the standard solution of SPDEs typically involves an It\^o stochastic integral, which is not defined pathwise. Next, we propose an SVI approach to approximating the solution of the system. We parametrize the class of approximate smoothing posteriors using a neural network, derive a lower bound on the evidence of the observed point process sample-path, and optimize the lower bound using stochastic gradient descent (SGD). We demonstrate the efficacy of our method on both synthetic and real-world problems, and demonstrate the advantage of the neural network solution over standard numerical solvers. | [
"Cox process",
"variational inference",
"stochastic differential equation",
"smoothing posterior density"
] | Reject | https://openreview.net/pdf?id=snaT4xewUfX | https://openreview.net/forum?id=snaT4xewUfX | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"17_K3juMYSR",
"ErYNOvdSXK0",
"6um-uRxyGxd",
"gushUQKdu19",
"ObdL0M4CSQy",
"xZHOtFZSujK",
"3S9O1xujXTc",
"BL6joWsS59",
"BpJCfvKaThx",
"keGz9unhkJE"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040384352,
1606291573355,
1606266338361,
1606119239994,
1605293487686,
1605290157233,
1605288032057,
1603858495783,
1603801173865,
1603704208474
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3436/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3436/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3436/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3436/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3436/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3436/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3436/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3436/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3436/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper presents a stochastic variational inference method for posterior estimation in a Cox process with intensity given by the solution to a diffusion stochastic differential equation. The reviewers highlight the novelty of the approach. Some of the concerns with regard to clarity have been addressed by the authors satisfactorily.\\n\\nHowever, an important issue of the approach is that of estimating model parameters, which the authors do not address explicitly, simply referring to that as the task of the modeller. I believe this is an important issue and, although some of the parameters can be estimated along with the neural network parameters, this has not been shown empirically. In a similar vein, the paper only presents results on a single real dataset (the bike-sharing dataset), which questions the applicability of the approach, and no other baseline method is presented. At the very least, the authors should have provided an objective comparison to other doubly stochastic point process models, e.g. based on Gaussian processes, where modern stochastic variational inference algorithms have been presented.\"}",
"{\"title\": \"Thanks\", \"comment\": \"This should be good enough. Thanks for taking my remarks into account.\", \"note\": \"Would it really make the presentation much more complex to include the estimation of $h$ and $b$ in the paper?\"}",
"{\"title\": \"Thank you for your comment\", \"comment\": \"We agree that often, the functions $b(\\cdot), \\sigma(\\cdot)$ and $h(\\cdot)$ are known only up to an unknown parameter (say $\\theta$) that needs to be learnt in a data-driven manner. Learning the parameters of $h$ and $b$ is relatively straightforward: they can be learnt along with the neural network weights using stochastic gradient descent. Learning the parameters of $\\sigma$ presents a slightly greater challenge since the path measures for different settings of $\\sigma$ are singular. Learning this is a topic for future research. We have added a paragraph before Sec 5.2 on page 8 in the updated draft to clarify this point further.\"}",
"{\"title\": \"Some clarification on Major points\", \"comment\": \"Thank you very much for taking into account all of my comments. Concerning the major points:\\n\\n1. It is now clear; maybe at the top of page 5 use 'we choose the subset of absolutely continuous' instead of 'a subset'.\\n\\n2 and 3. I understand that further parametric/modelling assumptions are necessary depending on the context of the application. However, my comment is not to give a detailed explanation on how to make inference of such parameters but to at least explain if this is possible/impossible and then how/why.\\n\\nIf that is not possible, then this would be a strong limitation of the methodology because any 'modeller', as you call them, would not be satisfied by 'heuristically setting' $\\sigma$ to an arbitrary value which then strongly influences the posterior in (11). Estimation and quantification of such parameters must be possible to ensure that the presented methodology has any kind of practical use, and this should at least be briefly discussed.\"}",
"{\"title\": \"Thank you for your positive comments about our work.\", \"comment\": \"We have uploaded the revised paper incorporating all the comments and a copy (as supplementary) with changes highlighted in blue. Please let us know if you have any questions or comments.\"}",
"{\"title\": \"Thank you for your comments and positive feedback.\", \"comment\": \"Please find our response to your questions/comments below:\\n\\n##### Major points\\n1. It is not clear how equation (13) is a .......be nice here. \\\\\\nR. Thank you for your comments. You are correct in pointing out that the class of measures induced by (13), denoted as $\\mathcal{Q}_{\\bar b}$, cannot be a subset of the measures induced by the SDE in (2); and we are also not claiming this, as this is false. In the sentence above eq (13), we say that $\\mathcal{Q}_{\\bar b}$ is a subset of all absolutely continuous measures with respect to $\\Pi_0$, which is a larger class of measures that also includes $\\Pi_0$. Since the measure induced by SDE (13) is absolutely continuous with respect to $\\Pi_0$, it is therefore also included in the set $\\mathcal{P}(\\mathcal{C})$. We have updated the paragraph after eq. (13) to clarify this fact.\\n2. The proposed framework requires to ...... in more detail.\\\\\\nR. Thank you for your comments. $b(\\cdot)$, $\\sigma(\\cdot)$, and $h(\\cdot)$ are the parameters of the model. Setting parameters is the task of the modeler, while our focus is mostly on inference given the model. In the interest of clarity, we decided not to include learning these in our paper. Instead, we heuristically set $\\sigma(x)$ to $1.1$ in the Bike sharing experiment as the count observations had higher variance than in the first experiment, as evident from the last plots in Figures 1 and 2 respectively. Moreover, we should point out that the target drift is not $b$; in fact, the target is the intractable drift derived for the smoothing SDE in eq. (11). \\n3. Similarly to the previous points,.......explain why?\\\\\\nR. Please refer to our response above.\\n\\n##### Minor points\\nThank you for pointing out these minor but important errors. We have corrected all of them in the revised paper.\\n\\nR1. We have added an original reference. 
(page 1)\\\\\\nR2. Removed 'that'. (page 2, para 4, line 2)\\\\\\nR3. Defined VBSP. (page 2, para 4, line 4)\\\\\\nR4. Corrected: 'Wiener'. (line 2 after eq.(2))\\\\\\nR5. Definition of $\\mathbb{E}^{\\dagger}$ added. (line 2 after eq.(3))\\\\\\nR6. We have added the definition of the indicator function. (line 1 after eq.(4))\\\\\\nR7. This is a typo; it should be $\\bar v_t(x)$. We have corrected it. (line 1 on page 4)\\\\\\nR8. Corrected reference to $h(\\cdot)$. (line 1 after eq.(6))\\\\\\nR9. We now use two separate notations, $f$ and $F$. (line 1 after eq.(7))\\\\\\nR10. We have ensured that eqref is used to reference equations.\\\\\\nR11. Defined VB. (line 1 after eq.(17))\\\\\\nR12. Specified the class of parametric functions used in Sutter et al. (Sec 4, para 1, line 6)\\\\\\nR13. Defined $\\Psi_t'$. (line 5 on page 7)\", \"note\": \"We have uploaded the revised paper incorporating all your comments and a copy (as supplementary) with changes highlighted in blue.\"}",
"{\"title\": \"Thank you for your valuable comments.\", \"comment\": \"Please find our response below:\\n1. Sometimes...... $\\mathbb{E}$ and E, $\\mathbb{R}$ and R \\\\\\nR. Thank you for pointing this out. We have gone through all the notations and ensured that they are all consistent. \\n\\n2. The definition of some of the notations ........ about the process? \\\\\\nR. Yes, your understanding is correct. Precisely, by $\\mathbb{E}[x_t|N_{0,T}]$, we mean $\\mathbb{E}[x_t| \\sigma(\\mathbf{N}_u,0\\leq u \\leq T)]$, where $\\sigma(\\mathbf{N}_u,0\\leq u \\leq T)$ is the smallest sigma algebra generated by the count observations from time 0 to $T$. We have added a sentence after eq. (3) to explain this fact.\\n3. Besides, please give a brief ....... Cox process.\\\\\\nR. We have added a paragraph on page 3 to briefly introduce the smoothing posterior with additional references. Moreover, on page 3, we have also added examples with references to illustrate the importance of diffusion modulated Cox processes in modeling various service and biological systems.\\n4. Please discuss the ...... is false.\\\\\\nR. Thank you for pointing this typo out. We meant that \\u2018h\\u2019 cannot be an identity function. We have fixed that statement. Moreover, the intensity function has to be non-negative as per the definition of the Cox process; therefore, the mapping $h$ has to be non-negative. We have added a sentence after eq. (1) for clarity.\", \"note\": \"We have uploaded the revised paper incorporating all your comments and a copy (as supplementary) with changes highlighted in blue.\"}",
"{\"title\": \"The idea of the SDE-modulated Cox process is interesting, but the writing needs to be improved.\", \"review\": \"This paper proposes an interesting point process named the diffusion modulated Cox process, which generalizes the stochastic intensity to a stochastic differential equation. The variational inference method looks sound.\", \"pros\": [\"The generalization of the SDE-type intensity is novel. The proposed stochastic variational inference makes sense. Especially the neural network solution is meaningful and will have an impact on the learning of point processes.\", \"Good empirical performance and analysis.\"], \"cons\": [\"I strongly recommend the authors further improve the presentation of the current draft. Sometimes the notations are not consistent, like $\\mathbb{E}$ and $E$, $\\mathbb{R}$ and $R$. The definitions of some of the notations are not very clear: for example, in Eq. (3), what is the definition of the conditional expectation? Is it the expectation of the intensity function given all the information about the process? Besides, please give a brief introduction to the mathematical background, such as the smoothing posterior. It would also be better if the authors could give some examples to illustrate the advantages of the proposed diffusion modulated Cox process.\", \"Please discuss the influence of the non-negativity of the intensity function. The paper claims that $h$ is non-negative and thus can be an identity function, which is false.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Solid contribution on point processes and variational inference\", \"review\": \"The paper provides a stochastic variational inference method for\\napproximating posterior path measures for doubly-stochastic Poisson processes\\nconditioned on realisations of paths of the Poisson process. The intensity\\nprocess is modelled as the solution to a diffusion stochastic differential\\nequation. The authors compare their method experimentally to the numerical\\nsolution of the associated system of stochastic partial differential\\nequations using the finite element method.\\n\\nThe paper, although technically difficult, is well written and clearly\\npresented. While I cannot validate the mathematical content in detail, the\\npaper seems technically correct. It deals with a well-defined problem in a\\ndifficult mathematical setting. \\n\\nThe idea of using variational inference for approximating the smoothing\\nposterior density seems well-founded. The method is experimentally validated\\non simulated and real data.\\n\\nIn summary, though my knowledge of point process theory is not sufficient to\\nevaluate all aspects of the paper in detail, I find the paper to be a solid\\ncontribution worthy of publication at ICLR.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"The paper is clear and well written with original and significant contribution.\", \"review\": [\"The paper under review proposes a variational inference procedure for a specific class of Cox processes whose intensity is derived from a stochastic differential equation. The methodology relies on a restriction of candidate solutions to the subset for which the drift depends on $x_t$, $N_t$ and $t$; the drift is then modelled with a neural network. By simulating from the candidate model, a sample average approximation of the ELBO is used within a stochastic gradient descent algorithm to optimize the bound and thus estimate the drift non-parametrically.\", \"The paper is clear and well written with original and significant contribution. The paper would benefit from clarifications about the following points:\", \"## Major points\", \"It is not clear how equation (13) is a subset of the solutions of (2); in general, more parameters imply more degrees of freedom, and thus the drift in (13) could look more general than $b(x)$ (the use of the $\\bar{b}$ notation suggests it refers more to (11) than (2)). This probably comes from the fact that all variables are indexed by $t$ but more details would be nice here.\", \"The proposed framework requires specifying both $b$ and $\\sigma$ a priori. It would be nice to explain the impact and restrictions of doing so. For instance, the first experiment set $\\sigma$ to $1$ and the second to $1.1$ without further justification. Could the author(s) elaborate on this point? Similarly, the authors use a very flexible model (NN) for $\\bar{b}$, so the main restriction comes from the choice of the `target' $b$: the impact of this choice should be discussed in more detail.\", \"Similarly to the previous points, in the methodology, the link function $h$ must be set a priori (and its uncertainty is not taken into account). 
Would it be possible to have an inference methodology estimating not only the drift but also $h$ (and $\\sigma$?) at the same time? If not, is it possible to explain why?\", \"## Minor points\", \"p.1 l.1: the reference for Cox processes is not the 'original' work.\", \"p.2: 'in particular, we show that...' either 'that' or 'how'.\", \"p.2: VBSP has never been defined at this stage of the paper.\", \"p.3: Wiener.\", \"p.3: the notation $\\mathbb{E}^\\dagger$ is undefined.\", \"p.3: define the indicator function that you are using.\", \"p.3: shouldn't the $x$ in the definition of $\\bar{v}(x_t)$ be bold?\", \"p.3: which 'section'? Simply 'above'?\", \"p.3: $f$ denotes first bounded measurable functions and then twice-differentiable ... Use two different notations to avoid confusion.\", \"In general, use eqref for equations.\", \"p.5: We call $Q^\\star$ as the VB ... Define the meaning of VB.\", \"p.5: specify the class of parametric functions used in Sutter et al.\", \"p.6: the notation $\\psi_t'$ is not defined. The link with equation (13) could be made clearer.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
PUkhWz65dy5 | Discovering a set of policies for the worst case reward | [
"Tom Zahavy",
"Andre Barreto",
"Daniel J Mankowitz",
"Shaobo Hou",
"Brendan O'Donoghue",
"Iurii Kemaev",
"Satinder Singh"
] | We study the problem of how to construct a set of policies that can be composed together to solve a collection of reinforcement learning tasks. Each task is a different reward function defined as a linear combination of known features. We consider a specific class of policy compositions which we call set improving policies (SIPs): given a set of policies and a set of tasks, a SIP is any composition of the former whose performance is at least as good as that of its constituents across all the tasks. We focus on the most conservative instantiation of SIPs, set-max policies (SMPs), so our analysis extends to any SIP. This includes known policy-composition operators like generalized policy improvement. Our main contribution is an algorithm that builds a set of policies in order to maximize the worst-case performance of the resulting SMP on the set of tasks. The algorithm works by successively adding new policies to the set. We show that the worst-case performance of the resulting SMP strictly improves at each iteration, and the algorithm only stops when there does not exist a policy that leads to improved performance. We empirically evaluate our algorithm on a grid world and also on a set of domains from the DeepMind control suite. We confirm our theoretical results regarding the monotonically improving performance of our algorithm. Interestingly, we also show empirically that the sets of policies computed by the algorithm are diverse, leading to different trajectories in the grid world and very distinct locomotion skills in the control suite. | [
"set",
"policies",
"algorithm",
"performance",
"tasks",
"worst case reward",
"sips",
"sip",
"smp",
"grid world"
] | Accept (Spotlight) | https://openreview.net/pdf?id=PUkhWz65dy5 | https://openreview.net/forum?id=PUkhWz65dy5 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"8TaoxGqYVaP",
"8PwZmjTaJGl",
"MaIYY1byCQa",
"aW2yBwLC3Hb",
"gS93_EoucxB",
"fyjAdsTpXor",
"H6BFLnJGGEL",
"WGleixvAFsu",
"hPekISQ_MRH",
"z0mhpzqQFy",
"siTr_0UF0vY",
"UeyCOamkc2x",
"NMZ9hywLQxe",
"FWZtEK2_tb",
"E1RGmrUN2P",
"M4zmYwbKKy1",
"nNtS773KdHC",
"-f-vjNDSLiV",
"xysqyfLEoyN",
"M0--Zyjkn1f",
"o7lbMx2EyM"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040509409,
1605886955915,
1605812886034,
1605799841554,
1605462575808,
1605453942851,
1605294872594,
1605294015419,
1605290899924,
1605289361659,
1605273549930,
1605273178158,
1605273088212,
1605272534552,
1605272512016,
1605271847854,
1605271667712,
1603931445955,
1603868577727,
1603735562321,
1603732767741
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3432/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Spotlight)\", \"comment\": \"All reviewers are positive or very positive about this work. The authors successfully addressed all questions. I believe this paper should be accepted.\"}",
"{\"title\": \"I've raised my score to accept\", \"comment\": \"I'd like to thank the authors for the detailed explanations. They made good points on the interest of studying the $\\\\ell_2$ ball rather than the $\\\\ell_\\\\infty$ one. As a consequence, I recommend accepting the paper.\"}",
"{\"title\": \"Response to Reviewer 2, cont.\", \"comment\": \"We would like to thank the reviewer for their response. The reviewer raised a good point. There are many reasons to choose the $\\\\ell_2$ ball: it includes all the directions; the magnitude of the reward doesn\\u2019t change the optimal policy in tabular MDPs; in robust optimization it is used when the uncertainty about some parameter is Gaussian and consequently the scaled $\\\\ell_2$ ball contains the true parameter with high probability; it is the standard assumption in related work and specifically in Apprenticeship Learning. We now highlight one more property that distinguishes the $\\\\ell_2$ ball from the $\\\\ell_\\\\infty$ ball and is in particular relevant to the reviewer\\u2019s question.\\n\\nImplied by the reviewer\\u2019s response is that a possible solution to Eq 7 is to solve the following problem:\\n$$\\\\max_{\\\\pi\\\\in\\\\Pi} \\\\min_{w\\\\in\\\\mathcal{W}} \\\\psi(\\\\pi)\\\\cdot w. $$\\n\\nThis formulation is similar to Apprenticeship Learning (AL) without an expert, which is different from our approach that is hierarchical. We refer the reviewer to our latest response to Reviewer 3 for a discussion on the similarities between our approach and AL and to Sec B in the supplementary material for more details. \\nAs the reviewer suggested, in the case where $\\\\mathcal{W}$ is the $\\\\ell_\\\\infty$ ball, the internal minimization problem in the above has a single solution - a vector with -1 in all of its coordinates. However, with other norm balls the solution to the internal minimization problem is a function of the policy. In the case of the $\\\\ell_2$ ball, it is the negative SFs normalized. 
This is important, since it clarifies that solving the min-max AL problem is not as easy and typically requires solving an MDP in each iteration (see the reference for AL above for more details).\\n\\nNow, the fact that the worst case reward is a function of the policy (or the policy set) forces it to make a tradeoff -- it has to \\u201cchoose\\u201d the coordinates it \\u201cwants\\u201d to be more adversarial for. This tradeoff is what encourages the worst case reward to be diverse across iterations (w.r.t different sets) and as a result it induces a diverse set of policies. Diversity was an important goal in this work -- in addition to minimizing Equation 7, we were interested in the diversity of the policies that result from that process.\\nThank you for pointing this out. We hope that we answered your question. We will add this discussion when we introduce the $\\\\ell_2$ ball in the paper.\"}",
"{\"title\": \"Focus on the ball reward definition\", \"comment\": \"First off, sorry for the bad formatting, I fixed it. I've also noticed only now that we could use the latex math mode and I'll use it from now on. Finally, sorry for not having been able to answer before.\\n\\nThank you for your clarifications. They addressed most of my points but I am still confused about my first one, which is also unfortunately my main concern. Let me first recall my initial questioning:\\n- although the reward-set setting is quite general: within a ball in the feature space, it includes very unnatural cases that make the problem artificially too complex, in my opinion. Indeed, usually, one should consider that the worst case reward function $r_{min}$ in a reward function family R is the one that is minimal in every state-action-state: $r_{min}(s,a,s') = \\\\inf_{r\\\\in R} r(s,a,s')$. In this case, the solution to equation (7) is straightforwardly the single policy that optimizes the return on $r_{min}$. Please discuss more the interest of considering $r=\\\\psi \\\\cdot w$ with w in a ball (it implies that if some feature takes sometimes positive and sometimes negative values, there is no clear $w_{min}$, and therefore no clear $r_{min}$).\\n\\nNow, indeed, I've been mistaken, because I was thinking about the $\\\\ell_\\\\infty$ ball (infinite norm), not the $\\\\ell_2$ ball (Euclidean norm). Can we agree that, if we take the $\\\\ell_\\\\infty$ ball instead, then the solution is trivial? So, why use the $\\\\ell_2$ ball? Is it closer to the model uncertainty we need to represent? If this is the case, what is the loss of using the $\\\\ell_\\\\infty$ ball instead?\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you very much for the clarifications, and for the updated paper and experiments. The authors' interpretation of risk/robustness as it relates to the framework is quite interesting. It could be interesting to see the emergence of \\\"safer\\\" behaviors of the agent on complex tasks such as driving, in the absence of the true reward, in future work. I am happy with the response and do not have further questions.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"I would like to thank the authors for the clarifications. I am happy with the response, as I believe it addresses the issues that I raised in my review.\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"We would like to thank the reviewer for replying to our response and for increasing their score. We appreciate your feedback and believe that it helped us to improve the paper.\", \"regarding_the_clarification_question\": \"indeed, these are the tasks in the grid world from Section 5.\"}",
"{\"title\": \"Response to authors #2\", \"comment\": \"Thank you for addressing the concerns in the above comments.\\nIt's definitely interesting that the performance seems to be a bit better than the baseline; to be honest, I would have expected it not to perform as well.\\n\\nJust to clarify, these are the same tasks on the grid-world from Section 5, correct?\\n\\n\\nI'm adjusting my score based on your responses.\"}",
"{\"title\": \"Experiments on a test set of rewards\", \"comment\": \"Dear reviewers, we would like to thank Reviewers 1 and 4 for expressing interest in the performance of our algorithm on a test set of rewards. This is an interesting setup which we didn't consider when we submitted our paper. We have now uploaded a new version of the paper where we performed this experiment in the supplementary material Section D. We hope that this is what you referred to in your review, but if it isn't please let us know.\\n\\nFor your convenience, we also provide a short description of the experiment here. We trained our algorithm and the two baselines in the same manner as we did before. During evaluation, we tested the performance of each method on a holdout set of rewards that were sampled uniformly over the unit ball. The results suggest that our algorithm achieves better performance than the two baselines when measured on this set of unseen rewards. More importantly, to achieve the same level of performance, our algorithm requires significantly fewer policies than the baselines. For more details, please refer to Supplementary Section D.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for taking the time to address the comments, I really appreciate the effort.\\nBelow are my comments on the response.\\n\\n- What is a use-case for this approach? So, you give an example of a robot learning locomotion in an unsupervised manner, and argue that this would help the agent prepare for worst-case in subsequent tasks. I'm trying to picture what this would look like in practice. Is what you are suggesting that by simply learning locomotion, the agent might not have covered scenarios that it would encounter in follow-up tasks, so worst-case performance in the locomotion skill could be poor enough that the new task becomes unlearnable?\\n\\n - Thanks for the clarification in the linear features. This makes sense.\\n\\n- Thanks for the clarification in figure 4. It would also be useful to see how the actual performance of the agent compares in traditional methods. The fact that the worst-case performance is better than for other methods would not be very useful, if the best-case performance is not good enough to complete the task. Even if for a few domains/tasks, I think this figure must be there.\\nIn other words, by ensuring that our worst-case performance is not too bad...what are we losing from the best case performance? Depending on the scenario, it might be acceptable or it might not.\\nIf time permits, please include such results.\\n\\n- On Lemma 3...thanks for the clarification. I see now how that would be useful.\\n\\n- I understand the time constraint for providing the one baseline I suggested. If you can include that, it will be very appreciated, but if you can't I won't hold it against you :)\\nIf you can only add one of the suggestions I have, please make it the comparison I suggested to add for figure 4.\"}",
"{\"title\": \"Revised version\", \"comment\": \"Dear reviewers, we have uploaded a revised version of our paper to reflect your comments. For your convenience, most of the changes are marked in blue color. Smaller notation changes were also fixed.\"}",
"{\"title\": \"Response to Reviewer 1, cont.\", \"comment\": \"Question 2: lemma 3.\\nGiven a set of policies, we have a lower bound on the worst case performance of the SMP. This is equivalent to computing the worst case reward w.r.t the current set and measuring the performance of the SMP. We do not, however, have a lower bound on what that value would be given that we run our algorithm for n iterations. That would indeed be a great contribution; we will mention it in the paper as a promising direction for future research. \\n\\nThat said, we want to emphasize that Lemma 3 is useful, as it provides a clear criterion for the convergence of our algorithm: whenever the upper bound provided in the lemma is achieved, we can stop adding more policies. We would like to point out that this in fact happens in practice, as we illustrate in our experiments (Figure 1a). This is surprising since often upper bounds are not attainable in practice, as the reviewer implied, which makes Lemma 3 even more relevant. \\n\\nQuestion 3 (experiments). \\nThis is an interesting suggestion, and we are working on performing this experiment, although we are not sure if we can make it on time. Our intuition is that that sort of baseline will be less practical to use, as it typically requires many iterations until it is able to generalize to new tasks (e.g. UVFA [4]) while our algorithm performs well after a few iterations.\\n\\nQuestion 4, value in figures. \\nThe values shown in Figures 1a and 2a correspond to the SMP\\u2019s value, given in Definition 5. The way we compute it is as follows. For each policy computed by our algorithm, we estimate the associated successor features (SFs) using Monte Carlo estimation: that is, we fix the policy and run it multiple times to estimate the SFs. 
We run it enough times to guarantee that the estimate is accurate (see, for example, theorem 2 in [2] or lemma 5 in [3], for a concentration bound on the approximation error for a given number of samples). Now, given a set of n policies (and their associated SFs), we compute the worst possible w, which we call w* (Equation 8). We then compute the inner product between the n SFs and w* and pick the maximum of these n values (Definition 5). This is the value shown in the figures, which represents the worst possible performance of the set of n policies across all possible tasks. \\n\\nQuestion 5. \\nYes, you are correct: this SIP is exactly the SMP (Definition 3).\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We would like to thank the reviewer for their feedback. Our response is split into two parts.\", \"question\": \"\\u201cwhat can be a use case of such an approach?\\u201d We believe that we presented an interesting learning framework in this work: in a world without an explicit reward, can an agent define goals for itself and discover interesting behaviors by doing so? Our experiments in DM control suggest that diverse and interesting locomotion skills (such as salta jumping) can emerge from following this process, which is an interesting scientific observation. For a more concrete use case, imagine that a robot can teach itself how to move and locomote in an unsupervised discovery phase, later to use these skills when instructed to do more complicated tasks. Since the skills that the robot discovers prepare it for the worst case, some of them are likely to be useful in the future from a robustness perspective, that is, no matter what the robot discovered, we are guaranteed that the robot won\\u2019t do too badly when faced with a new task. Thus, the learned skills can also be used to initialize the robot\\u2019s behaviour, to be followed by a learning phase. Another interesting contribution of this work is the connection between robustness and diversity in RL: we have demonstrated that by optimizing for the worst case, a diverse set of skills emerges.\\n\\nQuestion 1 (the linearity assumption):\\n\\nIt is indeed true that for a *fixed* set of features the linearity assumption is restricting. However, it is this assumption that allows us to develop theory and thus provide theoretical guarantees for the proposed algorithm. Note that the guarantees we provide are in fact applicable in practice, as we illustrate in our experiments. For example, the experiments in the tabular MDP follow the theoretical framework exactly and achieve the upper bound that we developed (see more on that in our answer to point 2 below). 
The experiments in DM control are also very close to the theory, the only deviation is that we use DRL to optimize the policy for a given reward. We note that this agreement between theory and practice is in fact one of the strengths of our work, since more often than not there is a gap between the two.\\n\\nAll that said, we point out that in the most general scenario we are free to define the features themselves. Note that, although the rewards are linear in the features, the features themselves can be arbitrary nonlinear functions of state-action pairs. This means that when we are able to define the features the linear assumption is in fact not so strong: for example, in [1], the authors discuss how in the tabular case this is not a restriction at all, and how in continuous state spaces we can find features that approximate any reward function with a certain accuracy. \\n\\nAs a somewhat counterintuitive observation, we note that in many problems it is in fact easy to handcraft simple features that generate useful behaviour. This is illustrated in our experiments with DM control, in which using the standard features provided in the suite we were able to generate rich behaviour. For example, we were able to teach the walker to do salta jumps (see the video in the supplementary material) by simply using linear combinations of the standard features. \\n\\nThirdly, our experiments suggest that the assumption is not too restricting in interesting problems. For example, we were able to teach the walker to do salta jumps (see the video in the supplementary material) under this assumption. \\n\\nLastly, we believe that it should be easy to generalize our approach to a more general setup where the reward is represented as a nonlinear (perhaps a DNN) of the features. In this case, the minimization over w will not be convex but will still be possible via the same techniques (SGD). 
This kind of algorithm will resemble GAIL (with a GAN) and we believe that it is an exciting direction of research for future work.\\n\\n\\n[1] Barreto, A., Hou, S., Borsa, D., Silver, D., & Precup, D. \\\"Fast reinforcement learning with generalized policy updates.\\\" Proceedings of the National Academy of Sciences (2020).\\n[2] Abbeel, Pieter, and Andrew Y. Ng. \\\"Apprenticeship learning via inverse reinforcement learning.\\\" Proceedings of the twenty-first international conference on Machine learning. 2004.\\n[3] Zahavy, Tom, Alon Cohen, Haim Kaplan, and Yishay Mansour. \\\"Apprenticeship Learning via Frank-Wolfe.\\\" AAAI (2020).\"}",
"{\"title\": \"Linearity of the reward in the features\", \"comment\": \"It is indeed true that for a *fixed* set of features the linearity assumption is restricting. However, it is this assumption that allows us to develop theory and thus provide theoretical guarantees for the proposed algorithm. Note that the guarantees we provide are in fact applicable in practice, as we illustrate in our experiments. For example, the experiments in the tabular MDP follow the theoretical framework exactly and achieve the upper bound that we developed (see more on that in our answer to point 2 below). The experiments in DM control are also very close to the theory, the only deviation is that we use DRL to optimize the policy for a given reward. We note that this agreement between theory and practice is in fact one of the strengths of our work, since more often than not there is a gap between the two.\\n\\nAll that said, we point out that in the most general scenario we are free to define the features themselves. Note that, although the rewards are linear in the features, the features themselves can be arbitrary nonlinear functions of state-action pairs. This means that when we are able to define the features the linear assumption is in fact not so strong: for example, in [1], the authors discuss how in the tabular case this is not a restriction at all, and how in continuous state spaces we can find features that approximate any reward function with a certain accuracy. \\n\\nAs a somewhat counterintuitive observation, we note that in many problems it is in fact easy to handcraft simple features that generate useful behaviour. This is illustrated in our experiments with DM control, in which using the standard features provided in the suite we were able to generate rich behaviour. For example, we were able to teach the walker to do salta jumps (see the video in the supplementary material) by simply using linear combinations of the standard features. 
\\n\\nThirdly, our experiments suggest that the assumption is not too restricting in interesting problems. For example, we were able to teach the walker to do salta jumps (see the video in the supplementary material) under this assumption. \\n\\nLastly, we believe that it should be easy to generalize our approach to a more general setup where the reward is represented as a nonlinear (perhaps a DNN) of the features. In this case, the minimization over w will not be convex but will still be possible via the same techniques (SGD). This kind of algorithm will resemble GAIL (with a GAN) and we believe that it is an exciting direction of research for future work.\\n\\n\\n[1] Barreto, A., Hou, S., Borsa, D., Silver, D., & Precup, D. \\\"Fast reinforcement learning with generalized policy updates.\\\" Proceedings of the National Academy of Sciences (2020).\\n[2] Abbeel, Pieter, and Andrew Y. Ng. \\\"Apprenticeship learning via inverse reinforcement learning.\\\" Proceedings of the twenty-first international conference on Machine learning. 2004.\\n[3] Zahavy, Tom, Alon Cohen, Haim Kaplan, and Yishay Mansour. \\\"Apprenticeship Learning via Frank-Wolfe.\\\" AAAI (2020).\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We would like to thank the reviewer for their constructive comments; we are sure that our changes to the manuscript following their suggestions have improved the quality of this work.\\n\\nPlease note that we made substantial changes to the paper following your suggestions. This includes addressing the $\\\\arg\\\\max$ in Definitions 3&4, and in Eq 5, and switching the notation in Eq 7. Regarding the question \\u201cwhere does $\\\\Pi$ live in?\\u201d: this is a good point, since the original submission confusingly used the same notation $\\\\Pi$ for the subset of policies and the set of all the policies in the MDP. We corrected that in the revised draft, such that it's clear that we optimize over a subset $\\\\Pi^n$ of $\\\\Pi$ where $\\\\Pi$ is the set of all the policies. \\n\\nScalar value functions. Thank you for the comment, we have updated the notation in Equation 3 to clarify that it is the expected value under the initial state distribution of the value function.\", \"question\": \"\\u201cwhy there is not another w that makes this policy fail\\u201d, this is exactly what our proof shows. We would like to refer the reviewer to the definition of the features in the second paragraph of the preliminaries section. From there, it is clear that the features are always positive, and are of dimension d. The latter answers the reviewer's question regarding d in Lemma 3. The former is important for the proof of the uniqueness of the worst case reward. We hope that Lemma 5 in the revised version will answer the reviewer\\u2019s concerns. Note that the places that were changed are in blue color. We explain some parts more clearly and rigorously than in the previous version and we hope that you will find that satisfactory.\\n\\nRegarding the reward-set setting. Please note that we defined the features to be positive. 
Although the features are positive, w can be negative and in general will be negative. So, for a given set of policies, the solution for the worst case reward is not just the min reward at each state. For example, if your set includes only the policy $(1,0)$ then the worst case reward will be $(-1,0)$ and not $(\\\\frac{-1}{\\\\sqrt{2}}, \\\\frac{-1}{\\\\sqrt{2}})$. We hope this addresses your concern regarding w_min. Regarding your question about the linearity in the features, see our answer in a separate post (note that this answer is also an answer to R1). \\n\\n\\u201cwhy optimize the policy set according to SMP?\\u201d This is a good question. The focus of this work is on the case that the reward is unknown. Therefore, if our set only includes a single policy, then the worst case reward w.r.t. it will always be \\u201cdevastating\\u201d. Mathematically, this means that if the set of the policies is not diverse, then the worst case reward can choose to be minus the SFs of one of the policies (normalised), that is, to be as adversarial as possible w.r.t a single policy. When the set includes more than one policy, then the policies, under an SMP, may complement each other. That is, if the reward is too adversarial w.r.t. a single policy, then it is likely that another policy in the set will be better w.r.t it. This is what happens in practice, when there is more than one policy that maximizes the worst case reward (active policies). In that case, the analytical solution of the worst case reward is not minus the SFs of one of the policies in the set. As a result, the value of the SMP w.r.t the worst case reward is better than that of the best policy in the set in isolation. \\n\\n\\u201cstick to the simplest setting of choosing the best policy\\u201d -- We revisited the problem formulation paragraph to make it clearer that we focus on the SMP as the mechanism that selects policies. 
Please note that once algorithm 1 finishes and returns a set of policies, this set can be used by other SIPs (such as GPI) to yield better performance. We verify that empirically in Figure 1a. \\n\\nWe hope that our response addressed all the reviewer\\u2019s concerns, but if it didn\\u2019t, please point us to the parts that we missed.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We would like to thank the reviewer for their feedback.\\n\\nQuestion 1.\", \"the_reviewer_is_correct\": \"for the baselines, we add policies to the set by following the baseline rule (either by sampling a random reward and optimizing it or by adding an orthogonal reward and optimizing it). For each method (including ours and the baseline) we have a different set of policies $\\\\Pi^t$ in each iteration. At iteration t we compute the worst-case reward w.r.t. SMP\\u2019s current set, $w^*(\\\\Pi^t)$, and report the performance of all the methods under this reward ($w^*$ is computed through (8)). This means that the reported performance of SMP is the worst possible across all the tasks, while this is not necessarily true for the baselines (that is, there might be tasks different from $w^*$ in which their performance is worse).\\n\\nThe reviewer\\u2019s suggestion of adding a \\u201ctest set\\u201d of tasks is interesting. We will do our best to add this experiment to the paper before the end of the rebuttal phase, and will certainly have it in the final version of the paper. As noted above, if we replace the task $w^*$ used in our evaluation with any other task, the performance of our algorithm (SMP) will improve. So, for example, if we report the average performance of our algorithm on a \\u201ctest set\\u201d, as suggested, this value will be greater than the one reported. This is not necessarily the case for the baselines. On the other hand, the performance of the baselines on the test set can in fact be better than SMP\\u2019s, since this is not what our algorithm is optimizing. The choice between maximizing the worst-case performance and the expected performance involves several interesting trade-offs; we elaborate on this point below. \\n\\nQuestion 2.\\nThis is also a very interesting suggestion; it would indeed be an interesting reference point. 
Such an algorithm should work well if we have some prior knowledge of the distribution of w or are able to sample from it (for example, by learning online as tasks are presented to the agent). However, we conjecture that the number of policies needed by such an approach to reach a good performance level would in general be considerably larger than the number of policies used by our algorithm (since in this case one has to cover all the support of the distribution over w with non-negligible probability mass).\\n\\nOptimizing for the worst case scenario gives us some benefits. First, we do not need to know anything about the distribution of w. This allows us to do things like building the library of policies in a completely unsupervised way, before ever seeing an actual task. Second, we can be very efficient: in our experiments we always found a set of diverse policies after only a few iterations. Focusing on the worst-case reward can also be very useful in scenarios where bad performance has a high cost associated with it: for example, if the agent is an autonomous vehicle, the priority might be to avoid accidents. \\nAll that said, we believe that the two approaches (optimizing for the worst-case or expected performance) are in fact complementary. Finding a feasible way to cover the space of rewards and combine it with our approach is an exciting direction for future work. \\n\\nWe will add the discussion above to the paper.\\n\\nQuestion 3.\\nThis is related to the previous question, and also an interesting point. We did observe in our experiments that sometimes our algorithm converged to the optimal value \\u201ctoo fast\\u201d. Concretely, this meant that after adding 2-4 policies to the set, newly-added policies were not diverse or meaningful because w*, the worst-case reward w.r.t the SMP computed through (8), was very close to being a vector with -1 in all of its coordinates. 
That is, after a few iterations, the benefit of adding a new policy diminished quickly. One direction that we explored to alleviate this issue was to regularize the worst-case reward w* to have zero mean. Note that removing the mean does not change the task but potentially increases the difference in the relative magnitude of the entries in w*. This did indeed help in making the policies more diverse. These experiments have been added to the supplementary material (Section C).\\n\\nQuestion 4.\\nWe used a simple actor-critic agent with experience replay to learn each policy. We experimented both with the case where the parameters are learned from scratch and with the case where they are transferred from one task to another, but it did not seem to make a big difference (same for the experience replay). We did not use SMP or GPI to learn the new task, although this is a great idea to be explored in the future (we will mention it in the paper).\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We would like to thank the reviewer for their feedback. The reviewer raised a question about the policies learned by following our algorithm: how do they relate to Apprenticeship Learning (AL), and can we better show if they learned similar/different things from one another.\\n\\nRelation to AL. This is a great question and we debated it when working on this paper. Both AL and our algorithm can be used to solve the same goal: achieve good performance w.r.t the worst case reward. However, AL is concerned with finding a single policy, while our algorithm is explicitly designed to find a set of policies. To be more specific, we need to refer to a specific AL algorithm, so we will refer to the projection algorithm by Abbeel & Ng (04). The basic idea in this algorithm is that given an estimate of the SFs of the expert (we use our notation though in their paper they refer to a similar quantity as feature expectations) we want to find its projection onto the SFs polytope as well as a policy whose SFs equal this projection. We refer the reviewer to Sec B in the supplementary material for more details, but provide a short summary here for the following discussion. The algorithm achieves that by maintaining a set of policies, and a convex combination of coefficients over the set of policies, known as a mixed policy, such that its SFs are the convex combination of the SFs in the set. The goal is to find the convex combination whose corresponding SFs are closest to those of the expert in the L2 norm. In each iteration, a policy is added to the set by maximizing a reward signal that is defined to be the negative gradient of this objective. \\n\\nThere is no direct connection between the policies that are discovered from following these two processes. This is because the intrinsic rewards that are maximised by each algorithm are essentially different. 
Another way to think about this is that since the policy that is returned by AL is a mixed policy, its goal is to return a set of policies that are similar to the expert, but not diverse from one another. From a geometric perspective, the policies returned by AL are the nodes of the face in the polytope that is closest to the demonstrated SFs. Even more concretely, if the SFs of the expert are given exactly (instead of being approximated from trajectories), then the AL algorithm would return a single vertex (policy). Finally, while a mixed policy can be viewed as a composition of policies, it is not a SIP. Therefore, it does not encourage diversity in the set. Our algorithm, on the other hand, is explicitly designed to return a diverse set of policies. \\n\\n\\u201cDescription of the types of policies that the different rewards lead to and how the policy computed by the proposed approach relates (or differs) from them\\u201d. We would like to refer the reviewer to Figure 3, where we visualize the policies learned by our algorithm in DM control. There we show a snippet of the trajectories taken by different policies. In the supplementary material, we further provided videos that we recorded from these policies. So, for example, the pendulum balances itself up and down, and the cheetah tries to stand on either leg, or to walk in either direction. The walker and the hopper discovered other locomotion skills that are not expected, but the key finding is that they are indeed very diverse from one another. We hope that this is what the reviewer asked for, but in case the reviewer believes that there are some missing details, or in case that we didn\\u2019t answer all of their questions, please let us know what you think is missing and we would provide more details.\"}",
"{\"title\": \"The paper addresses an interesting problem, and the proposed approach is clear and well formalized. The paper analyzes the proposed approach theoretically as well as empirically, attaining good results.\", \"review\": \"= Overview =\\n\\nThe paper introduces an approach that, given a set of \\\"basis\\\" policies, constructs a high-level policy from the basis policies that is able to perform well in a variety of distinct (but related) tasks. Such tasks are described by MDPs with similar state-action spaces and similar dynamics, and differing only on the reward functions, all of which are built as a linear combination of common features.\\n\\nGiven a set of policies, the paper introduces the notion of \\\"set improving policy\\\" as a policy that outperforms any policy in the given set on the family of considered tasks. It provides two examples of such policies (SMP and GPI) and formalizes the problem of computing a SIP with maximal worst-case performance on the set of considered tasks as a max-min problem. It then contributes an incremental algorithm for this problem. The proposed approach is tested in a grid-world environment and the DM control suite.\\n\\n= Positive points =\\n\\nThe paper is very well written, with the proposed approach clearly motivated, presented and analyzed. The proposed approach is novel, to the extent of my knowledge, and analyzed both theoretically and empirically. \\n\\n= Negative points =\\n\\nMy main criticism is, perhaps, some lack of detail on the experimental evaluation -- particularly in the DM control suite.\\n\\n= Comments = \\n\\nOverall, I really enjoyed reading the paper. 
The problem addressed -- that of building a policy that performs well in a number of related tasks from a set of \\\"simpler\\\" policies -- is, in my view, quite relevant for the RL community, and has potentially interesting applications in domains such as robotics.\\n\\nThe proposed approach is, as far as I know, original and contributes to the state of the art. The paper briefly links its contributions to the existing literature on apprenticeship learning and hierarchical RL, but I would have appreciated some more discussion on these topics -- particularly, I'd like to better understand how the learned policy relates with policies taught through apprenticeship learning.\\n\\nOverall, the ideas in the paper are presented in a very clear and elegant manner and the results strike me as technically sound. The proposed approach focuses on building a set of \\\"basis\\\" policies in such a way that the policy built from them performs as well as possible in all the considered family of tasks. The method is derived from first principles, and the performance bounds provided (framed in terms of the performance of the SMP policy) are then validated empirically. \\n\\nFinally, the paper is evaluated in a smaller grid-world domain and in the DM control suite. One aspect that could, perhaps, be improved is concerned with the description of the empirical evaluation in the DM control suite: the paper does describe how the family of rewards for these tasks were built, but it would be good to provide some description of the types of policies that the different rewards lead to and how the policy computed by the proposed approach relates (or differs) from them.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Strong and non-trivial theoretical contributions, interesting empirical insight that connects directly to the theory\", \"review\": [\"Summary: the authors propose to solve a family of related tasks with shared features and rewards that are linear in the features and equivalent up to scaling factor. The main contributions are as follows:\", \"a novel framework for analyzing a broad family of generalized policies (policies that are generalized to arbitrary rewards in the task space), including the concept of a set improving policy (SIP), and providing two practical examples that fit this definition, namely the worst case set max policy (SMP) and the well known and studied generalized policy iteration (GPI). It is shown that it is always better to use GPI over SMP, making it an instance of SIP.\", \"a novel iterative method for building a policy library for solving the worst-case reward, formulated as a convex optimization problem, along with policy improvement guarantees, an informed method for stopping the algorithm, and the ability to remove redundant policies (termed inactive policies)\", \"an empirical evaluation that connects the proposed method to learning a policy library with a diverse set of skills. 
The theoretical results are also validated experimentally, on a grid world example and control problems from Deepmind.\"], \"pros\": [\"the work is of very high quality, all motivations seem sound and the theoretical results seem correct\", \"the idea of active task selection for building the policy library is very interesting, and it is surprising that this has not been considered within the framework of Barreto et al., 2017 so far\", \"the work could be of significance in the apprenticeship/curriculum/meta-RL community, and it is nice to see a more theoretical treatment of this topic\"], \"questions\": [\"If my understanding is correct, the authors use the orthogonal and random basis to propose w at each iteration, but evaluate the resulting SMP policies with respect to the optimized rewards from (8). I am wondering if this is a fair evaluation for the baselines, given that the policies are always evaluated on $w_t^{SMP}$, or whether a new set of tasks (a proper \\\"test\\\" set) sampled from B (the standard ball) should be used to fairly compare (8) with the baselines? This would really test the generalization of the method on new instances as well, and is also often standard in the literature for evaluating the performance of a learning policy set. In other words, how robust is the resulting policy library to solving new task instances not previously seen before?\", \"Also, one thing that could explain the poor performance of the orthogonal baseline is that the reward seems to be quite sparse when most of the basis elements are set to zero (in the one-hot phi case, wouldn't they be almost always uninformative?) In this case, a more suitable baseline that directly targets diversity could be defined as finding the $w_1, w_2 \\\\dots w_T$ such that their coverage of the task space is maximized under some prior belief over w (e.g. the standard ball). 
If I am not mistaken, this problem is similar to the maximum coverage or voronoi tessellation problem, which could be solved in advance and then deployed. (e.g. Arslan, 2016)\", \"Performing well relative to the worst-case performance seems reasonable so that the agent does not do poorly on any one task, but it could also be overly conservative. That is, could there be situations where optimizing the worst case leads to the agent not successfully completing the desired objective (e.g getting stuck on locally optimal solution)?\", \"at each iteration when the new optimal policy is learned with respect to $w_\\Pi^{SMP}$, is the idea of SMP or GPI and previously learned policies used to help learn this new policy, or is it learned entirely from scratch (e.g. by simple epsilon-greedy)?\"], \"minor_comments\": \"- the legends in Figure 1a/b and the axis font in Figure 1c could be increased, same with Figure 2\\n- is the $\\max_i$ necessary in equation (8)?\\n\\nOverall, this work proposes a coherent theory for policy improvement, that also leads to useful implementation and interesting empirical insight (and cool visualizations). It can often be hard to obtain all of these at once.\\n\\nArslan, Omur, and Daniel E. Koditschek. \\\"Voronoi-based coverage control of heterogeneous disk-shaped robots.\\\" 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #3\", \"review\": \"Given a rewardless environment MDP, the authors want to find a set of policies for the worst case reward function. Their process involves two steps: first to select the right set of policies and second to combine them to generate a new policy. The policy selection is made with the only goal to maximize the expected return of highest achieving policy of the set in the worst-case reward function (Equation (7)).\", \"unfortunately_the_submission_suffers_from_several_serious_weaknesses\": [\"although the reward-set setting is quite general: within a ball in the feature space, it includes very unnatural cases that make the problem artificially too complex, in my opinion. Indeed, usually, one should consider that the worst case reward function r_min in a reward function family R is the one that is minimal in every state-action-state : r_min(s,a,s') \\eqdef \\inf_{r\\in R} r(s,a,s'). In this case, the solution to equation (7) is straightforwardly the single policy that optimizes the return on r_min. Please discuss more the interest of considering r=\\psi.w with w in a ball (it implies that if some feature takes sometimes positive and sometimes negative values, there is not clear w_min, and therefore no clear r_min).\", \"besides ensuring that SMP performance is at least achieved, could the authors elaborate a bit more on why optimize the policy set according to SMP?\", \"the authors never theoretically consider combining the policy, apart for stating that a good combination of policy should achieve higher performance than the best of the policy set. For clarity, I would recommend to either stick to the simplest setting of choosing the best policy given a reward function, or to consider policy selection that takes into account the way the policies are going to be used/combined.\", \"the formalization is messy and sometimes unnecessarily confusing. 
Please see the series of comments below:\", \"Definition 3 and Eq. 5: the argmax returns an index, not a policy. (minor)\", \"v is not a value, it's a policy performance. It has been very confusing to me, as it led me to think for too long that Lemma 1 was false: choosing the policy that maximizes the value in each state is a form of policy improvement that may lead to policies that are strictly better than the best of the policies. Also, I would not use the notation v, that is usually the value-function: a function of the state and not the policy performance like here, i.e. the expectation of the value-function over the initial states distribution. (easy to fix)\", \"Definition 4: the argmax returns an action not a policy. (minor)\", \"Equation 7: \\Pi lives in? Also instead of max_{\\psi\\in...} \\psi.w, I would use max_{i\\in [n]} \\psi^i.w. (minor)\", \"Lemma 3: what is d? (minor)\", \"I am still not understanding Definition 6 and Theorem 2. How do we know that the worst-case reward is unique? If we keep only the policies that achieve max performance on \\overline{w}, then we probably only keep one? How do we ensure that there is not another w that makes this policy (or set of policies) to fail?\", \"For all these reasons, I recommend to reject the submission.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Overall interesting idea. Need some clarification.\", \"review\": \"This was a well-written, and interesting paper to read!\\n\\nI went over the paper many times, and I am still failing to see the use case for such an approach. I have some questions and comments that need some clarification for me to properly evaluate the submission. Please, take the time to answer the following, so my review can better reflect the paper.\\n\\n1 - The theory developed in the paper relies on reward functions that can be represented as linear combinations of the features of the MDP. This seems to be restrictive, and intuitively, this would be the exception rather than the rule.\\nWhat class of problem could be modeled under this restriction? In many problems, there is no linear reward function that would allow an agent to achieve the desired behavior, so these techniques would not be helpful. What are some practical settings where this approach would be beneficial?\\n\\n2 - Lemma 3... this statement is putting an upper-bound on the worst case performance, but since the paper focuses on improvement of worst case performance, it would be beneficial to have a lower bound, but an upper bound doesn't seem too useful. Essentially, this lemma is saying \\\"I can guarantee that the worst-case won't be better than this upper bound, and that for some MDP with linear reward function this upper bound is attainable.\\\" The problem is that we don't know what that MDP is, how likely it is that we would find it, and this lemma allows for the worst case performance to be arbitrarily bad.\\nI don't think this lemma, as is, is particularly useful.\\n\\n3 - On the experimental section, I think there's a baseline that should be included that's missing. What if we have 1 policy and add a task descriptor or extra features to the features vector that corresponds to the type of task? 
How would the performance empirically compare?\\n\\n4 - In the learning curves for fig 1.a or 2.a, what does \\\"value\\\" (y-axis) represent? Is it the return of the agent after training? If so, is it using the extrinsic reward or the transformed linear reward described in line 5 of \\\"DeepMind Control Suite\\\"?\\n\\n5 - Based on equation 4, for definition 2 of SIP. There is always a trivial set improving policy, right? That would correspond to picking the policy for max(v^i_w).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
Qpik5XBv_1- | Language Controls More Than Top-Down Attention: Modulating Bottom-Up Visual Processing with Referring Expressions | [
"Ozan Arkan Can",
"Ilker Kesen",
"Deniz Yuret"
] | How to best integrate linguistic and perceptual processing in multimodal tasks is an important open problem. In this work we argue that the common technique of using language to direct visual attention over high-level visual features may not be optimal. Using language throughout the bottom-up visual pathway, going from pixels to high-level features, may be necessary. Our experiments on several English referring expression datasets show significant improvements when language is used to control the filters for bottom-up visual processing in addition to top-down attention. | [
"Referring Expression Understanding",
"Language-Vision Problems",
"Grounded Language Understanding"
] | Reject | https://openreview.net/pdf?id=Qpik5XBv_1- | https://openreview.net/forum?id=Qpik5XBv_1- | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"cfJJ7QDY7aa",
"OgJh8j0_RbH",
"nd9FRVs5_w1",
"D0WF1fjAEIM",
"X02BGSPzzR3",
"vr4cjmbkui",
"YpGeltReJ7D",
"CiQc9XzOPRR",
"bxS1-55Cv35"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040353031,
1606305559226,
1606305481439,
1606305373607,
1606305319404,
1604416709926,
1604040233817,
1603989876187,
1603901115914
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3429/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3429/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3429/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3429/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3429/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3429/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3429/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3429/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper proposes to improve image segmentation from referring expression by integrating visual and language features using a UNet architecture and experimenting with top-down, bottom-up, and combined (dual) modulation.\", \"review_summary\": \"The submission received divergent reviews with scores spanning from 2 (R2) to 5 (R3,R4) to 10 (R1). The author response failed to address the reviewer concerns, with one reviewer (R4) lowering their score to 4 after the rebuttal. It also became clear that some relevant work (Mei et al, 2018) was used for the baseline but not cited. The author response also did not recognize the importance of significance tests.\\n\\nAs there is considerable work in the area of image segmentation from referring expression, and the proposed model is very similar to the LingUNet model of Misra 2018, the originality and significance of the work is fairly low. The main contributions appear to be experimental comparisons of the three types of modulation (top-down, bottom-up, dual).\", \"pros\": [\"Investigation of an important problem of grounding language to visual regions\", \"Experimental study of whether dual modulation improves image segmentation from referring expression\", \"Cons\", \"Relatively minor novelty with limited analyses (R3,R2)\", \"Missing citations (see R3's comments). Relevant work (Mei et al, 2018) which was the basis for the top-down baseline model, was used but not cited or properly compared against\", \"Relatively weak experimental results (R2,R4). As R4 noted, while validation results are good, test results are weak compared to existing work, indicating potential overtuning.\", \"No qualitative comparisons against baselines.\", \"Cognitive claims not backed up and limited discussion/analysis (R2)\"], \"recommendation\": \"The AC concurs with R2, R3, and R4 that the work is limited in novelty and not ready for publication at ICLR. 
Despite R1's high score, referring expression for image segmentation is a well studied task, and it is unclear what are the key innovations of the proposed model over LingUNet. Due to the limited novelty, relatively weak test results, as well as other flaws pointed out by the reviewers, the AC recommends rejection.\"}",
"{\"title\": \"Thank you for your constructive comments\", \"comment\": \"See the end of this section for an answer to 1-3.\\n\\n> The evaluations don't convince me that this paper has made a significant conceptual contribution.\\n\\nOur ablation analysis shows that modulating both bottom-up and top-down visual processing with language improves the performance over bottom-up-only (-1.22 IoU) and top-down-only (-5.82 IoU) baselines. This is our main contribution in this study. We also showed that our model generalizes well enough on four different datasets achieving SOTA or near SOTA results. \\n\\n> The quantitative results don't seem to constitute an enormous improvement over past work. The variability in table 2 across evaluation sets makes me doubt the statistical significance of the claims. No statistical significance tests (across resamples of test data or across random training restarts) are provided.\\n\\nSee the previous comment also. We agree with you but we didn't perform any significance tests because of two reasons: (i) it'd take too much time, (ii) no previous work presented these results. We spent our time performing comprehensive ablation studies to show our main contribution. We also performed several experiments for \\u201cDual Modulation w/ 1x1 filters\\u201d model in ablation studies. Our model achieves 60.74 mean IoU for all the experiments with 0.06 standard deviation.\\n\\n> I don't closely follow the relevant literature and can't speak confidently on the originality of the model. I did have trouble understanding the innovation over Step-ConvRNN, however -- these models seemed within tweaking-distance of one another based on the presentation in this paper.\\n\\nThe cognitive science studies we cited show that language has an important role in visual processing and has an effect on the early stages of the visual processing. 
Those studies measure this effect by looking at the response times to the inputs that trigger the visual cortex that are known to correspond to the early visual process. However, they do not discuss how language affects the visual perception in detail (nobody yet knows). We also do not possess the answer to this question. We were inspired by the idea of language having an effect on the low-level visual processing and tried this idea on a related task with a model that we can modulate both top-down and bottom-up visual processing with language explicitly.\\nOn the other hand, the Step-ConvRNN work does not show the effect of top-down and bottom-up visual processing individually. They only performed experiments with both top-down and bottom-up visual processing at the same time. Their architecture is far more different/complicated than ours and it is also not suitable for this kind of ablation study.\"}",
"{\"title\": \"Thank you for your constructive comments\", \"comment\": \"> The major concern of the paper lies in the empirical results. Although the introduction of top-down and bottom-up language modulation significantly boosts the baseline performance (Tab. 1), the full model struggles to match existing works on certain metrics such as UNC testA/testB, UNC+ testB, and ReferIt, which puts a question mark on the effectiveness of the work. The results on the validation set are promising but not as good on the test set, which indicates a possible over-tuning of the model.\\n\\nThe main point of our paper is to argue for the effectiveness of modulating both top-down and bottom-up visual streams with language, which our results support. We do not believe there was any over-tuning: we only tuned the model (e.g., the number of layers, the number of filters, the size of the LSTM) looking at the results obtained on the UNC validation set. We use the other validation sets only for early stopping. This will be made more clear in the camera-ready version.\\n\\n> A minor comment on the model part. In the text above Eq. 1, the paper mentions \\\"[...] ,we split the textual representation [...]\\\". However, what is the rationale for splitting the representation since each split does not attach to any particular abstract of the image feature (low-level, mid-level, and high-level)?\\n\\nWe made this decision based on Mei et al (2018) which proposed our baseline model (the top down approach). We tried to use the final hidden state as a whole for language kernel generation in our preliminary experiments, however, we observed slight declines in the performance.\\n\\n> Besides, some numbers from Tab. 1 do not match those from Tab. 2. For instance, the IoU on LSCM and Step-ConvRNN. Please double check.\\n\\nWe have obtained those numbers from the corresponding studies. 
Step-ConvRNN presents results of different models (step=4 and step=5) for the ablation study and the SOTA comparison. We checked it again and we couldn\\u2019t find a reason why the authors present different numbers for LSCM.\"}",
"{\"title\": \"Thank You\", \"comment\": \"Thank you for your time and review. We believe that our work demonstrates the importance of modulating both bottom-up and top-down processing with language for vision-language studies.\"}",
"{\"title\": \"Thank you for your suggestions\", \"comment\": \"Thank you for your suggestions. We implemented S2, S3 and S5 for now.\", \"answer_to_q1\": \"We obtained results for the number of layers 2, 3 and 4. The depth=4 improves the performance slightly (1 IoU) over depth=3. In this architecture, the contracting branch halves the input on each layer, which limits the number of layers that can be used in the model. Due to the size limit of the GPU, we haven\\u2019t experimented with larger number of layers.\", \"answer_to_q2\": \"One of the possible ways of interpreting the interaction between language and visual processing is clustering a specific layer\\u2019s language filters obtained for each phrase. Possibly, obtained clusters would give information about which language component (adjectives, prepositions, nouns etc.) has a role on which part of the architecture. Another possible way of inspecting the effect of language on the visual processing could be a word removing/masking experiment. In this experiment, checking the segmentation performance of the model after removing/masking a word would give some insights whether the model actually uses the words given in the phrase or not. Currently, we are working on both analyses. We also added an incremental analysis in the appendix.\", \"answer_to_q3\": \"We have experimented with bi-directional LSTM and self-attention over embeddings obtained from the last layer of a Bert model. In our preliminary experiments, these approaches didn\\u2019t improve the current model. Due to the memory constraints, we continued with the basic LSTM model. As suggested by the question, introducing an inductive bias by aligning feature maps and word tokens using the parse tree of the expression could improve the performance of the model or help the learning process. 
However, it also requires an understanding of how language works in visual processing (which part of the expression affects which part of the visual processing). In this study, we proposed an end-to-end approach where the model itself learns the connection between language and visual processing.\"}",
"{\"title\": \"good experimental setup, lacks the depth of novelty and analyses\", \"review\": \"This paper proposes to integrate visual and linguistic features in both top-down and bottom-up modulation of the visual input. This is done by fusing two modalities while doing convolution and deconvolution operations over the visual input. Experiments on image segmentation from referring expressions in standard datasets show that the proposed approach achieves state-of-the-art or competitive results. Ablation studies show that both top-down and bottom-up is essential. I believe the novelty and contribution are rather thin because many ways of the modeling language are not explored at all.\\n\\nBelow I list suggestions (S) and questions (q) for authors:\", \"s1_second_paragraph_of_introductions\": \"please add a figure to explain the concepts of top-down, bottom-up processing, high-level, low-level effects etc.\\n\\nS2 Section 2.2.: Please cite the below papers [1,2] for referring expression comprehension. For Section 2.4 please add [3]\", \"s3_figure1\": \"following this figure is not intuitive. I recommend adding two arrows for top-down and bottom-up processing and adding more space between two branches.\\n\\nS4 Section 4.2: It is not clear how each of these ablations was performed. For instance, I'm not 100% sure whether two modalities are fused at different levels of top-down or bottom-up processing.\\n\\n[1] Nagaraja et. al. \\\"Modeling context between objects for referring expression understanding.\\\" \\n[2] Cirik et. al \\\"Using syntax to ground referring expressions in natural images.\\\"\\n[3] Chen et. al. \\\"Touchdown: Natural language navigation and spatial reasoning in visual street environments.\\\"\", \"s4\": \"Section 4.3: the claim of modeling the long-range dependencies is a bit speculative. I would rephrase that.\", \"s5\": \"Figure2: Failure cases are more informative than successful ones. 
Please either bring the figure from A.1 or add a comparison with a model from the literature where the other model is successful where yours is not to do a contrastive analysis on how your model can be improved.\", \"q1\": \"Section3: What's the effect of the number of layers for the model? Why stop at 3? Do you have results for the number of layers 0,1,2?\", \"q2\": \"Is there a way to interpret the interaction between language and visual input?\", \"q3\": \"Have you experimented with different ways of fusing or processing language input? Examples: gating the language representation, attention over tokens, using different fusion methods, bi-directional LSTM, BERT-like contextual representations, adding inductive bias with parse tree for referring expressions, alignment between feature maps and word tokens or phrases?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea; empirical results are ok\", \"review\": \"This paper concerns the problem of image segmentation from referring expressions. Given an image and a query phrase about a particular object in the image, the goal is to locate the target object as a mask at the pixel level. The basic framework is U-Net, which consists of two branches: an image encoder and segmentation map decoder (connected at the bottom in a U-shape). The paper proposes to use language to modulate the image encoding and decoding process intensively, by applying auxiliary convolutional connections between the two branches and further condition the convolution kernel on the language embedding. Overall, the paper is easy to follow and has done a good literature review.\\n\\nThe major concern of the paper lies in the empirical results. Although the introduction of top-down and bottom-up language modulation significantly boosts the baseline performance (Tab. 1), the full model struggles to match existing works on certain metrics such as UNC testA/testB, UNC+ testB, and ReferIt, which puts a question mark on the effectiveness of the work. The results on the validation set are promising but not as good on the test set, which indicates a possible over-tuning of the model.\\n\\nA minor comment on the model part. In the text above Eq. 1, the paper mentions \\\"[...] ,we split the textual representation [...]\\\". However, what is the rationale for splitting the representation since each split does not attach to any particular abstract of the image feature (low-level, mid-level, and high-level)?\\n\\nBesides, some numbers from Tab. 1 do not match those from Tab. 2. For instance, the IoU on LSCM and Step-ConvRNN. Please double check.\\n\\n============== Post-Rebuttal ==============\\n\\nThe authors' responses to point 1 & 2 are not sound (reflecting a question to another paper does not solve the problem). 
The authors mentioned \\\"We made this decision based on Mei et al (2018) which proposed our baseline model (the top down approach)\\\", where the reference of Mei et al (2018) cannot be found in the paper, as a critical baseline. This raises a flag on the novelty of the work and completeness of the related work. Therefore, I am lowering my rating to 4.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Very interesting paper\", \"review\": \"This article proposes a novel approach integrating language throughout the visual pathway for segmenting objects according to referring expressions.\\n\\nThe article is well written, and poses an important question about how best to integrate linguistic and visual information. The limitations of the currently dominant top-down approach are well argued. The answer proposed by the authors is to integrate linguistic information throughout the visual hierarchy. The task of segmenting by referring expression is important and well chosen. \\n\\nThe proposed model is sound, and well described in the article, and the experimental results demonstrate that the model outperforms clearly the state-of-the art in all metrics. The qualitative examples provided are quite impressive and demonstrate the success of the approach. \\n\\nIn sum, I feel this is a well written paper addressing a very timely and important problem in computer vision and AI research and should be of broad interest in the community.\", \"rating\": \"10: Top 5% of accepted papers, seminal paper\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Possibly worthwhile idea, but poorly motivated and evaluated\", \"review\": \"This paper presents a model for image segmentation from referring expressions which integrates linguistic representations of the referring expressions both at low-level and high-level stages of visual processing. They argue that this model is both more cognitively plausible and more successful than models which only use linguistic representations to modulate attention over high-level visual features.\\n\\nI vote for rejection, mainly on grounds of significance and quality, expanded below. To change my vote I would require a substantial improvement on one or both of the quality/significance issues listed below, either presenting the model with a clear conceptual motivation, or doing satisfactory model analysis to understand the contribution of the paper.\", \"pros\": \"The presented model shows some moderate quantitative improvement over other recent work.\", \"cons\": \"Poor conceptual motivation and error analysis, with little evident understanding of the actual effect of the linguistic representations within the model.\\n\\nQuality/Significance\\n\\n1. The paper does not provide a clear motivation for their model. Here are some arguments I looked for but did not find:\\n 1. Cognitively: There are some references to relevant cognitive science papers, but there seems to be little concrete inspiration taken from this or other cognitive work in the particular model design.\\n 2. A priori based on the task: What would we expect to gain from using language in early-stage visual representations? What sort of correlations might exist between particular types of linguistic input and low-level visual representations? This might be another way to motivate the model, but I can't find any such discussion in the introduction or anywhere else.\\n2. The evaluations don't convince me that this paper has made a significant conceptual contribution.\\n 1. 
The quantitative results don't seem to constitute an enormous improvement over past work. The variability in table 2 across evaluation sets makes me doubt the statistical significance of the claims. No statistical significance tests (across resamples of test data or across random training restarts) are provided.\\n 2. There is no satisfactory analysis of the actual cause of the model's success. What are the contents of the linguistic representations, and how exactly do they modulate low-level visual features? For reference, Hu et al. (2020, Figure 4) [1] and Hui et al. (2020, Figure 5) [2] both do some of what I'm looking for here, showing the influence of language on the behavior of the model. While the more complex representations used in this model make it more difficult to provide e.g. an easy heatmap, we absolutely need to see an error analysis that helps us believe your claim that language ought to play a role in low-level visual processing. \\n\\nOriginality\\n\\nI don't closely follow the relevant literature and can't speak confidently on the originality of the model. I did have trouble understanding the innovation over Step-ConvRNN, however -- these models seemed within tweaking-distance of one another based on the presentation in this paper. \\n\\n[1]: https://openaccess.thecvf.com/content_CVPR_2020/papers/Hu_Bi-Directional_Relationship_Inferring_Network_for_Referring_Image_Segmentation_CVPR_2020_paper.pdf#page=5\\n[2]: https://arxiv.org/pdf/2010.00515.pdf#page=14\\n\\n## Post-rebuttal update\\n\\nI have read the other reviews and the authors' rebuttals, and do not wish to change my review.\\n\\nI strongly believe that numerical task improvements are not in themselves a conceptual contribution. 
I look forward to the results of the analyses the authors mention in response to R3-Q2, to better understand what exact interaction between language and low-level visual input is being modeled.\\n\\nAlong with R4 I remain unconvinced of the strength of the empirical results. The authors' response is not helpful here. I can't understand where the numbers (mean 60.74 IoU, std 0.06) come from -- taking stats across table 2 and table 1, I get very different results, so I must be misunderstanding where they come from.\\n\\nSignificance tests would not take too much time -- it's not absolutely critical that you retrain the models for this. You can use data resampling methods instead. For example, on each individual dataset, run bootstrap tests comparing the predictions of your model and others on random resamples of the evaluation data and corresponding predictions.\\nPooling IoU results across datasets within model and then comparing between models can yield misleading results and should be avoided.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
rvosiWfMoMR | Automatic Music Production Using Generative Adversarial Networks | [
"Giorgio Barnabò",
"Giovanni Trappolini",
"Lorenzo Lastilla",
"Cesare Campagnano",
"Angela Fan",
"Fabio Petroni",
"Fabrizio Silvestri"
] | When talking about computer-based music generation, two are the main threads of research: the construction of $\textit{autonomous music-making systems}$, and the design of $\textit{computer-based environments to assist musicians}$. However, even though creating accompaniments for melodies is an essential part of every producer's and songwriter's work, little effort has been done in the field of automatic music arrangement in the audio domain. In this contribution, we propose a novel framework for $\textit{automatic music accompaniment}$ $\textit{in the Mel-frequency domain}$. Using several songs converted into Mel-spectrograms, a two-dimensional time-frequency representation of audio signals, we were able to automatically generate original arrangements for both bass and voice lines. Treating music pieces as images (Mel-spectrograms) allowed us to reformulate our problem as an $\textit{unpaired image-to-image translation}$ problem, and to tackle it with CycleGAN, a well-established framework. Moreover, the choice to deploy raw audio and Mel-spectrograms enabled us to more effectively model long-range dependencies, to better represent how humans perceive music, and to potentially draw sounds for new arrangements from the vast collection of music recordings accumulated in the last century. Our approach was tested on two different downstream tasks: given a bass line creating credible and on-time drums, and given an acapella song arranging it to a full song. In absence of an objective way of evaluating the output of music generative systems, we also defined a possible metric for the proposed task, partially based on human (and expert) judgment. | [
"music arrangement",
"generative adversarial networks",
"music generation"
] | Reject | https://openreview.net/pdf?id=rvosiWfMoMR | https://openreview.net/forum?id=rvosiWfMoMR | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"DGdn4yX8Cby",
"-EcJaASvot",
"MwuYQ0okIIs",
"sGqet7hSufC",
"OUqoXfqliy",
"tBjdNZSl8n",
"pwtRn2VKyPb"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040431621,
1606154542976,
1606154241954,
1606152146282,
1604321211103,
1603937681181,
1603902304446
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3427/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3427/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3427/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3427/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3427/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3427/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"All Reviewers point out that the paper, although having some strong points, does not meet the bar for a highly-selective machine learning conference like ICLR. Hence, my recommendation is to REJECT the paper. As a brief summary, I highlight below some pros and cons that arose during the review and meta-review processes.\", \"pros\": [\"Well-written paper.\", \"Ambitious task.\", \"Code will be released.\"], \"cons\": [\"Unclear task terminology (music production; misleading title).\", \"Mixed results.\", \"Experimental design could be improved.\", \"Exposition could be improved (technical details missing).\", \"Lack of comparison (for instance with other CycleGAN variants; more experimental setups).\", \"Lack of discussion on the use of a source algorithm for pre-processing data.\"]}",
"{\"title\": \"The reviewer's attention to the signal processing part of our pipeline helped us to gain awareness and to strengthen the presentation of our contribution.\", \"comment\": [\"DEMUCS is by no means perfect, and we should expect some bleed-through of the target signal (eg drums) into the separated signal (eg bass). If this happens, the task becomes significantly easier than if the system was presented with clean stems. $-> \\\\textbf{Our answer}$ This has been clarified with a more in-depth explanation in the section 4.1\", \"We can't rely on previously reported BSS-EVAL metrics to give a sense of DEMUCS' performance on FMA for generating the training data. The FMA dataset is quite different from MusDB in terms of production quality and instrumentation, and given the small size of MusDB, the reported metrics are almost certainly an over-estimate of quality we should expect on FMA. $-> \\\\textbf{Our answer}$ This could surely be the case, and at first we were worried about possible contaminations. Manual inspection of separated drums and bass lines reassured us about the reliability of extracted signals. Both drums and bass contained very little or none bleed-through.\", \"It is not demonstrated that including the FMA data is necessary or beneficial for this task (though it's not unreasonable to expect that this is indeed true). An experiment showing how the system performs if only trained end-to-end on musdb would make the existing results easier to interpret and place in context. $-> \\\\textbf{Our answer}$ Thank you for raising this issue! You are perfectly right, Unfortunately we were not able to conduct an in depth analysis on how much we can shrink the data set without losing quality in the outcomes. It has to be said that this is a very hot topic / open question in the deep learning community.\", \"The biggest omission here is the specific method for recovering the waveform from the generated Mel spectrograms. 
Phase information is discarded early on in the process, but is critical to the perceptual quality of generated audio. In listening to the included examples, it's pretty clear that there's a great deal of phase distortion in the results of both tasks. (It's less perceptible in the drum synthesis task because the target signal does not generally consist of sustained tones, but it's still audible.) This left me wondering how exactly the phase retrieval is done, and to a lesser extent, how the Mel spectrogram inversion is done. $-> \\textbf{Our answer}$ We agree with the reviewer; we added a detailed explanation of phase retrieval and time domain signal reconstruction in section 3.2\", \"The authors claim that the source separation model (DEMUCS) is time-equivariant (section 3.1), but I don't see how this is justified. DEMUCS uses a U-net architecture with a bidirectional LSTM middle layer, which is not generally time-equivariant. $-> \\textbf{Our answer}$ We added the following paragraph: it is worth adding that Demucs shows a nice property for a source separation model, namely it is time-equivariant, meaning that a shift in the input mixture will cause a congruent shift in the output. The model does not naturally feature this property, but it is achieved through a workaround (randomized equivariant stabilization) as explained by the original authors.\", \"Why are the spectrograms quantized to 256 values? I agree that this probably doesn't introduce much distortion, but it seems unnecessary. Point of clarification: are these spectrograms using linear magnitude or logarithmic (decibel) magnitude? This decision would have a significant effect on how quantization is performed, but it's not clearly articulated in the paper. Figure 1 suggests a log scaling, but does not provide details. An equation would go a long way here. 
$-> \\textbf{Our answer}$ We added details to clarify this point by explaining how we transform a signal to a Mel-spectrogram representation, with some formulas. Moreover, we changed the plot, which was indeed not very clear.\", \"Is there any windowing applied in the short-time Fourier transform (eg Hann or Hamming)? I would expect so based on the lack of transient artifacts in Figure 1, but it's not explicitly stated. I ask because having listened to the provided examples, it sounds like there could be some modulation artifacts in the reconstruction that could be traced to the choice of window function. Aside: if you're using an existing software package to implement your Mel spectrogram, it should be cited. $-> \\textbf{Our answer}$ We added all required details in the paper\", \"I like the approach of mapping automatic scores to human judgments, but I'm confused as to why the targets were binarized. Why not do an ordinary least squares or isotonic regression, which would discard less of the information? $-> \\textbf{Our answer}$ Thank you for the observation. In principle we did not want to binarize targets, but after a thorough discussion with the evaluators we noticed that generated samples were either acceptable for production purposes or not. Within these two groups the differences were not particularly marked.\"]}
"{\"title\": \"Thanks to the reviewer's clever remarks we were able to address several weaknesses of our work.\", \"comment\": [\"the experimental code is not shared, and the dataset section lacks a few details to reproduce the findings easily. $-> \\\\textbf{Our answer}$: The pipeline has been implemented in PyTorch and all the code will be released upon acceptance to promote reproducibility, we also added all the necessary details to the paper.\", \"The authors could have still used variants of CycleGAN... $-> \\\\textbf{Our answer}$: When writing the paper, due to space concerns and the irrelevance of results, we decided to leave out experiments conducted with the Pix2Pix architecture. We made this point clear in section 4.5\", \"The sources are primarily limited to bass, drums, and vocals... $-> \\\\textbf{Our answer}$: Thanks to the cycle consistency property of CycleGAN, drums2bass samples were automatically generated as well. Nonetheless, we did not add this task because creating a bass line from scratch is outside of our objective. Given drums, the system would not have enough harmonic information to generate a bass accompaniment. This is more of a generative task.\", \"The evaluation and discussion could have more depth... $-> \\\\textbf{Our answer}$: In table 1 we added the correlation matrix for all 4 annotators. We used the Pearson correlation because of the continuous nature of the averaged scores.\", \"The method works on music strictly with drums, bass, and vocals... $-> \\\\textbf{Our answer}$: We already specified this condition in the Introduction actually. We recalled this aspect in Section 3.4, where we describe more in detail the case study presented in the Introduction.\", \"\\\"Nevertheless, only raw audio representation can produce, at least in the long run, appealing results in view of music production for artistic and commercial purpose.\\\" ... 
$-> \\textbf{Our answer}$: The statement was rephrased.\", \"Citing the two papers below could improve the literature review... $-> \\textbf{Our answer}$: these references were added.\", \"Please cite FMA and MusDB18 datasets ... $-> \\textbf{Our answer}$: done!\", \"The authors only mention that Demucs ... is time equivariant... $-> \\textbf{Our answer}$: There may have been a misunderstanding; the authors did not intend to state or observe other properties. The text in the relevant section has been changed to make it clearer.\", \"The authors should mention and cite the library they have used to extract MFCCs. $-> \\textbf{Our answer}$: done!\", \"It would be beneficial to share IDs of the songs in the subset for reproducibility purposes... $-> \\textbf{Our answer}$: done!\", \"The authors should state the number of songs used from the MusDB18 dataset $-> \\textbf{Our answer}$: We added these pieces of information in the data set section.\", \"In the test set, instead, we chose only a few samples for each song due to the relative uniformity of its content... $-> \\textbf{Our answer}$: sorry, we did not explain ourselves well: we meant that, in the evaluation pipeline, we only converted a random selection of samples from each test song, instead of the whole song.\", \"The authors portray subjectivity as unfavorable... $-> \\textbf{Our answer}$: We rephrased the justification at the beginning of section 4.3.\", \"Section 4.3: In the paper, the authors do not state the cultural background or the genre(s) of the focus of the music experts... $-> \\textbf{Our answer}$: In table 1 we added the correlation matrix for all 4 annotators.\", \"What is the distribution of scores for bass and voice? $-> \\textbf{Our answer}$: They were all considered optimal because they came from high-quality productions.\", \"How much do the artifacts (due to imperfections in source separation) affect the judgments? $-> \\textbf{Our answer}$: This is very hard to say and quantify. 
In the future, we plan to dig deeper into the data quality requirement. For the bass2drums task, imperfections in source separation were virtually absent.\", \"Introduction, Paragraph 1...The phrasing somewhat disregards the music studios. $-> \\textbf{Our answer}$: We agree with the reviewer, this statement was too sharp; because of this, we changed it to \\u201cprovide a wide set of tools to manipulate recordings and simplify the composition process for artists and producers\\u201d.\", \"Page 2, top row... It reads like all two-dimensional time-frequency representations are called \\\"Mel-spectrogram\\\"s... $-> \\textbf{Our answer}$: We agree with the reviewer, the statement is ambiguous; because of this, we changed \\u201c(known as Mel-spectrogram)\\u201d to \\u201c(in particular, we opted for the Mel-spectrogram time-frequency representation)\\u201d.\", \"The text should explain the relevance of the selected experimental settings... $-> \\textbf{Our answer}$: We added a couple of lines to explain the relevance of the selected experimental settings in section 3.4\", \"\\\"Figure 1 shows a Mel-spectrogram example... $-> \\textbf{Our answer}$: We rephrased as \\u201cwhich is treated as a single-channel image, representing the sound intensity with respect to time - x-axis - and frequency - y-axis\\u201d.\"]}
"{\"title\": \"The reviewer made several very thorough comments. We tried to address all of them and we think that, thanks to the reviewer's suggestions, the paper is much stronger now.\", \"comment\": [\"Title + Abstract\", \"The title is misleading... $-> \\\\textbf{Our answer}$: we decided to keep it as it is for two reasons: first, quite often producers do not mix nor master songs since they consider these tasks as post-production. Surely, many others see music production as the whole process, but the issue is debatable. Second, even though we agree with you that the challenge we tackle is more restrictive, we wanted to contextualize it as part of a long-term effort towards completely automatic music production.\", \"\\\"Despite consistent demands from producers and artists...\\\" $-> \\\\textbf{Our answer}$: we rephrase this sentence highlighting the reasons why the automatic arrangement generation is so important for producers, artists, and companies.\", \"\\\"Automatic music arrangement from raw audio in the frequency domain\\\" $-> \\\\textbf{Our answer}$: we changed the sentence to \\u201cautomatic music accompaniment in the Mel-frequency domain\\u201d.\", \"The authors claim that they are the first to treat music audio as images... $-> \\\\textbf{Our answer}$: the main innovations of our contribution are now stated more clearly at the end of the introduction. Moreover, we added the proposed reference.\", \"Introduction\", \"The authors claim that automatic accompaniment in the waveform/frequency domain has many advantages... $-> \\\\textbf{Our answer}$: for more details on the limitations and advantages of the waveform/frequency domain and the main results for the symbolic domain, we modified the \\u201cRelated works\\u201d.\", \"The authors mention that they use the Demucs algorithm for source separation... 
$-> \\textbf{Our answer}$: we added details!\", \"The authors mention the low-computational cost of their proposed method, however, they do not satisfactorily quantify this claim... $-> \\textbf{Our answer}$: the claim has been rectified. By low-computational cost, we were referring to the fact that our method is fully parallelizable and does not need to go through the lengthier procedure caused by autoregressive mechanisms used by comparable models in the field like the one cited in the paper.\", \"Related works\", \"The authors do not cite any of the extensive literature on music generation in the symbolic domain... $-> \\textbf{Our answer}$: we added to the \\u201cRelated Works\\u201d section many relevant papers dealing with symbolic representation. We did not do so at first because the two research threads are often seen as parallel.\", \"\\\"Nevertheless, only raw audio representation can produce, at least in the long run, appealing results...\\\" $-> \\textbf{Our answer}$: We added the following statement to the related section: another very promising approach would be to work with symbolic music and then use state-of-the-art synthesizers to produce sounds. MIDI, music sheets, and piano rolls, however, are not always easy to find or produce. Moreover, many musicians and artists cannot read music and would be more comfortable working in a less formalized setting. Finally, state-of-the-art synthesizers, although increasingly indistinguishable from live recordings, cannot yet reproduce the infinite nuances of real voices and instruments. Conversely, raw audio representation could be more appealing for some creators given its flexibility and the little musical competence required.\", \"Method\", \"There are no details provided about the Demucs algorithm... $-> \\textbf{Our answer}$: We added relevant details under section \\u201cSource Separation for Music\\u201d\", \"A reference/citation about the Mel scale... 
$-> \\textbf{Our answer}$: Added relevant citation.\", \"There are no details about the CycleGAN used in the paper... $-> \\textbf{Our answer}$: A dedicated section (4.2) was added to answer all of the questions.\", \"Experiments\", \"How was the subset of pop music selected? $-> \\textbf{Our answer}$: Thank you for the observation. You can now find the following line in the datasets section\", \"How did the authors arrive at the 4 attributes of quality, euphony, coherence, and intelligibility? $-> \\textbf{Our answer}$: To clarify this point we added: the choice fell on these four aspects after a thorough discussion with the evaluators. We asked them to list and describe the most relevant dimensions in evaluating the quality of a piece of pop-rock music.\", \"The features (STOI, FID) $-> \\textbf{Our answer}$: To be honest, we do not assume that these features adequately represent the generated audio. Instead, we tried to extract/engineer two sets of features that could predict human judgment. After several attempts, we ended up using a modified version of STOI and FID.\", \"I found the description of the grades and the subsequent comparison in Figure 3 difficult to follow. I think the description needs to be significantly more rigorous. $-> \\textbf{Our answer}$: We rephrased the section to better highlight our intentions and how we proceeded. Nonetheless, it is important to stress that these descriptions stem from an ex-post analysis of the results and were not given as guidelines to raters.\"]}
"{\"title\": \"-The title is misleading. The title claims that the proposed model is for \\\"Automatic Music Production\\\". However the actual task considered is more restrictive. The authors propose a model for automatic accompaniment. Music Production involves many other tasks like mixing, mastering and so on, none of which are a part of this study. The title should therefore be updated to be more specific.\", \"review\": \"This paper proposes a method for automatically generating accompaniments using Mel-spectrograms as inputs to a CycleGAN. Overall I think the paper requires significant revision and additional work before it can be accepted as a conference publication.\", \"abstract\": \"-\\\"Despite consistent demands from producers and artists...\\\": I think this sentence should be rephrased to motivate the need for automatic accompaniment from a different angle. If not, the authors should present some justification for the demand for this technology from artists and producers. \\n\\n-\\\"Automatic music arrangement from raw audio in the frequency domain\\\": why not simply say automatic music arrangement/accompaniment in the Mel-frequency domain? I find the raw audio part of the description unnecessary and confusing. \\n\\n-The authors claim that the they are the first to treat music audio as images and then apply techniques from computer vision. However, treating spectrograms as images is the current standard for many MIR tasks like music transcription, chord recognition and so on e.g. \\\"An end-to-end Neural Network for Automatic Music Transcription\\\": https://ieeexplore.ieee.org/abstract/document/7416164/. There are hundreds of other publications that are similar to this approach.\", \"introduction\": \"-The authors claim that automatic accompaniment in the waveform/frequency domain has many advantages. However they fail to motivate the short-comings of this approach. 
Namely the lack of source separated training data and the extreme difficulty in source separation for music recordings. It would also be useful to cite a review paper or some of the many publications on automatic accompaniment generation in the symbolic domain so that the reader can find references to this problem which has an extensive literature already. \\n\\n-The authors mention that they use the Demucs algorithm for source separation. However they do not provide any details whatsoever about this approach, especially the downsides. A quick scan of the paper reveals that the algorithm introduces severe artefacts under various conditions. \\n\\n-The authors mention the low-computational cost of their proposed method, however they do not satisfactorily quantify this claim. Firstly, is computational cost an issue? Does this algorithm have to run on a mobile device? Will it be run in a streaming setting? These questions are not answered in the paper.\", \"related_works\": \"-The authors cite many papers on music generation in the waveform domain however they do not cite any of the extensive literature on music generation in the symbolic domain. This literature is extremely relevant to the work presented in this paper. \\n\\n-\\\"Nevertheless, only raw audio representation can produce, at least in the long run, appealing results in view of music production for artistic and commercial purposes.\\\" Why is this the case? Why is generating music in the symbolic domain and then using state-of-the-art synthesisers not an appealing direction? This point isn't made clear in the paper.\", \"method\": \"-There are no details provided about the Demucs algorithm used to separate the source training data into various channels like vocal, bass, drums etc. How big was the model? Did the authors train the model themselves? Did they use a pre-trained model? Were there any artefacts present in the source separated tracks? Are there any downsides to this algorithm? 
Are there any alternatives to this algorithm? Do the artefacts not interfere with the downstream task? \\n\\n-A reference/citation about the Mel scale would be useful. \\n\\n-There are no details about the CycleGAN used in the paper. How big is the model? What is the architecture? How was it trained? What flavour of gradient descent was used for training? What are the hyper-parameters? Was the model trained on a single GPU?\", \"experiments\": \"-How was the subset of pop music selected? How was the metadata filtered to obtain the 10000 tracks used for training? If the filtering algorithm cannot be outlined, then it would be useful to provide a list of the 10000 tracks used for training, for the purpose of reproducibility. \\n\\n-How did the authors arrive on the 4 attributes quality, euphony, coherence and intelligibility? Is there some theory that suggests that these 4 attributes would be useful in determining whether the accompaniment is somehow good? These attributes have been presented without justifications and citations. \\n\\n-The features (STOI, FID) used to compare the automatically generated accompaniment have also been presented without much justification. Why is it that these features are an adequate representation of the generated audio? \\n\\n-I found the description of the grades and the subsequent comparison in Figure 3 difficult to follow. I think the description needs to be significantly more rigorous.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Promising directions but the study needs to be extended\", \"review\": \"In the paper, the authors adapt CycleGAN, a well-known model for unpaired image-to-image translation, to automatic music arrangement by treating MFCCs extracted from audio recordings as images. Also, the authors propose a novel evaluation metric, which learns how to rate generated audio from the ratings of (some) music experts. The authors make use of two large-scale datasets to train and evaluate the model on two scenarios, namely 1) generating drum accompaniment a given bass line, 2) generating arrangement given a voice line. They report promising results on the first task; however, the model is not as successful on the second (more challenging) task.\\n\\nThe problem is challenging, and meaningful solutions may bring innovative and creative solutions to music production. The literature is well-covered, with a few missing citations (see below). The approach is built upon existing work, and the experiments are conducted on two relevant, public datasets. On the other hand, the experimental code is not shared, and the dataset section lacks a few details to reproduce the findings easily.\", \"below_are_the_shortcomings_of_the_paper\": \"1. While adapting past music generation work for arrangement generation is not trivial, the authors could have still used variants of CycleGAN and other unpaired image-to-image translation models for comparison.\\n2. The sources are primarily limited to bass, drums, and vocals. I do not think the narrow scope is an issue on a paper focusing on an unexplored subject. On the contrary, the experiments could have more variety, e.g. drums2bass, bass&vocals2drums, and other combinations, so that we could examine which settings bring interesting and/or challenging outcomes in arrangement generation.\\n4. The evaluation and discussion could have more depth, e.g. 
inter-annotator agreement, the effect of source separation in the generated audio (separation errors, audible artifacts, ...)\\n\\nThe paper is novel in its application and brings promising results. However, the authors should extend the experiments, compare relevant models against each other, and discuss the results in more detail. Therefore, I would strongly encourage the authors to build upon their existing work and re-submit the revised paper to ICLR or another conference such as ISMIR.\\n\\nSpecific comments\\n=================\\n\\n- As mentioned above, the authors should have added more \\\"experimental settings.\\\" At least they should have included \\\"generation of a bass line given the drums\\\" (reverse of bass2drums) because 1) it would have allowed the readers to contrast the performance with bass2drums, 2) the task would be closer to the real-world use case (drums are typically the first to be recorded in a session followed by bass).\\n\\n- The method works on music strictly with drums, bass and vocals, which is not mentioned until Section 3.4. This limitation/condition should be specified clearly and earlier in the Introduction and/or in Section 3.1.\\n\\n- \\\"Nevertheless, only raw audio representation can produce, at least in the long run, appealing results in view of music production for artistic and commercial purpose.\\\"\\n\\n Even if we restrict ourselves to popular music, this argument is too ambitious if not misleading. Many artists (performers, composers, conductors, etc.) are not only fully fledged but - by profession - required to appreciate music by reading sheet music. Countless programmable interfaces and software, which make use of symbolic/hybrid music representations but do not generate raw audio directly, have been used extensively as part of music performances and production in both an artistic and commercial setting. 
While audio - without any doubt - is the essence of music, we can never disregard other representations.\\n\\n- Citing the two papers below could improve the literature review:\\n\\n >Hawthorne, Stasyuk, Roberts, Simon, Huang, Dieleman, Elsen, Engel and Eck, \\\"Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset\\\", International Conference on Learning Representations, 2019. => similar to the authors' design decision, this paper uses a cheaper intermediate representation (music scores) for efficiency\\n\\n >Donahue et al. LakhNES: Improving multi-instrumental music generation with cross-domain pre-training => the paper involves mapping (\\\"arranging\\\") the instrumentation in MIDI files to NES sound channels.\\n\\n- Please cite `FMA` and `MusDB18` datasets following the instructions in the respective online sources.\\n\\n- Section 3.1. \\\"While showing nice properties,\\\"\\n\\n The authors only mention that Demucs solves audio source separation (for the data the authors use) and the algorithm is time equivariant. However, the text reads like the authors would like to state other properties as well. If there are others, they should be stated explicitly.\\n\\n- Section 3.2.\\n\\n The authors should mention and cite the library they have used to extract MFCCs.\\n\\n- Section 4.1 \\\"we chose to select only pop music and its sub-genres for a total of approximately 10,000 songs\\\"\\n\\n It would be beneficial to share IDs of the songs in the subset for reproducibility purposes. Also, the authors do not state whether they use the untrimmed or trimmed versions of the tracks in the FMA dataset, which is a crucial detail for model training as well as experimental reproducibility.\\n\\n- The authors should state:\\n\\n 1. number of songs used from the MusDB18 dataset (i.e. have they used both the train and test splits?)\\n 2. 
Total duration and number of samples in training, test and fine-tuning\\n\\n- In the test set, instead, we chose only a few samples for each song due to the relative uniformity of its content: in other word, we expect our model to perform in similar ways on different parts of the same song.\\n\\n I find this assumption a bit unrealistic. In what sense is the content uniform across the song? Is it uniformity in mixing, structure, arrangement, melody, tempo, or rhythm? Even if the authors use trimmed audio excerpts for training/testing, these characteristics can vary substantially within seconds. \\n\\n The authors should clearly state how they define content uniformity, provide a more informed argument around this assumption and experimentally show that the assumption holds for the test set.\\n\\n- Section 4.2: \\\"the result is somehow subjective thus different people may end up giving different or biased ratings based on their personal taste\\\"\\n\\n The authors portray subjectivity as unfavourable. However - music being a human construct - there are no objective, universal criteria for appreciating it. Likewise, the evaluation metric, which the authors are proposing, is based on the subjective responses from music experts. I think the justification needs rephrasing.\\n\\n- Section 4.3: In the paper, the authors do not state the cultural background or the genre(s) of focus of the music experts. The inter-annotator agreement between the experts is not presented either. Due to lack of information and the small number of subjects, it is difficult to assess whether the (trained) evaluation metric has positive/negative/desired biases based on the experience, knowledge, personal taste etc. of the experts. 
Therefore, the claim about the proposed \\\"metric correlating with human judgment\\\" is a bit weak.\\n\\n- What is the distribution of scores for bass and voice?\\n\\n- How much do the artifacts (due to imperfections in source separation) affect the judgements?\\n\\nMinor comments\\n==============\\n\\n- Introduction, Paragraph 1: \\\"allow artists and producers to easily manipulate recordings and create high quality songs directly from home.\\\"\\n\\n The phrasing somewhat disregards the music studios.\\n\\n- Page 2, top row: \\\"given a musical sample encoded in a two-dimensional time-frequency representation (known as Mel-spectrogram)\\\"\\n\\n It reads like all two-dimensional time-frequency representations are called \\\"Mel-spectrogram\\\"s, instead of the authors using Mel-spectrograms, which is one type of two-dimensional time-frequency representations. \\n\\n- The text should explain the relevance of the selected experimental settings to the music production: e.g. drums and bass are usually the first \\\"sessions\\\" to be recorded; a demo typically consists of the melodic prototype/idea with minimal accompaniment, which is later arranged by many collaborators...\\n\\n- \\\"Figure 1 shows a Mel-spectrogram example, a visual representation of a spectrum, where the x axis represents time, the y axis represents the Mel bins of frequencies and the third gray tone axis represents the intensity of the sound measured in decibel (Briot et al., 2020).\\\"\\n\\n I do not understand what the authors mean by \\\"third gray tone axis.\\\" Is it because the MFCCs are treated as a single channel image, hence \\\"gray\\\"? 
If yes, it is better to state that the \\\"MFCCs are treated as a single channel image\\\" without resorting to image processing jargon.\\n\\n- \\\"Mel-frequency cepstral coefficients are the dominant features used in speech recognition, as well as in some music modeling tasks (Logan & Robinson, 2001)\\\"\\n\\n It may be better to introduce this sentence earlier in the paragraph.\\n\\n- Section 3.4: \\\"On the one hand, ... On the other hand\\\"\\n\\n It might be easier to read if the setting is enumerated for readability.\\n\\n- Section 4.1: \\\"To train and test our model We decide\\\"\\n\\n Lowercase \\\"We\\\" -> \\\"we\\\"\\n\\n- MusDB18 URL is broken\\n\\n- Section 4.3: \\\"Time: a rating from 1 to 10 of whether the produced drums and arrangements are on time the the bass and voice lines\\\"\\n\\n Double \\\"the the\\\" -> \\\"with the\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting study, but needs more development\", \"review\": \"This paper describes an approach to what is often termed \\\"automatic accompaniment\\\" generation in music.\\nGiven an input signal of one source (e.g., vocal or bass), the system is trained to generate one or more accompanying signals (drums, full arrangement).\\nThe authors propose a CycleGAN model to learn transformations between source and accompaniment domains.\\nThe model is trained on a combination of a large collection of automatically separated signals (FMA) and a small collection of isolated stem recordings (musdb).\\nThe models for two tasks (bass->drum and vocal->full) were evaluated by a combination of human listener testing and automatic offline scoring, to somewhat mixed results.\\n\\nOverall, I found this paper interesting and generally well-written.\\nThe combination of subjective and offline evaluation was nice to see, and given the inherent difficulty of the problem, I don't consider the mixed results to be a total negative here.\\nThat said, I do think there are areas in which this paper could be significantly improved, both in terms of experimental design and exposition.\\n\\n\\nThe experiments presented here make use of both pre-separated stems (MusDB) and automatically separated signals produced by DEMUCS on the FMA dataset.\\nGiven the size of available stem datasets, I understand the motivation for going this route.\\nHowever, I think there needs to be some quantitative evaluation of the impact of each part here, for several reasons:\\n\\n1. DEMUCS is by no means perfect, and we should expect some bleed-through of the target signal (eg drums) into the separated signal (eg bass). If this happens, the task becomes significantly easier than if the system was presented with clean stems.\\n2. We can't rely on previously reported BSS-EVAL metrics to give a sense of DEMUCS' performance on FMA for generating the training data. 
The FMA dataset is quite different from MusDB in terms of production quality and instrumentation, and given the small size of MusDB, the reported metrics are almost certainly an over-estimate of quality we should expect on FMA.\\n3. It is not demonstrated that including the FMA data is necessary or beneficial for this task (though it's not unreasonable to expect that this is indeed true). An experiment showing how the system performs if only trained end-to-end on musdb would make the existing results easier to interpret and place in context.\\n\\n\\nIn terms of exposition, as stated above, I find the paper mostly clear and easy to follow.\\nHowever, many technical details are omitted that make it both impossible to reproduce and difficult to interpret.\\nThe biggest omission here is the specific method for recovering the waveform from the generated Mel spectrograms.\\nPhase information is discarded early on in the process, but is critical to the perceptual quality of generated audio.\\nIn listening to the included examples, it's pretty clear that there's a great deal of phase distortion in the results of both tasks.\\n(It's less perceptible in the drum synthesis task because the target signal does not generally consist of sustained tones, but it's still audible.)\\nThis left me wondering how exactly the phase retrieval is done, and to a lesser extent, how the Mel spectrogram inversion is done.\", \"minor_comments\": [\"The authors claim that the source separation model (DEMUCS) is time-equivariant (section 3.1), but I don't see how this is justified. DEMUCS uses a U-net architecture with a bidirectional LSTM middle layer, which is not generally time-equivariant.\", \"Why are the spectrograms quantized to 256 values? I agree that this probably doesn't introduce much distortion, but it seems unnecessary. Point of clarification: are these spectrograms using linear magnitude or logarithmic (decibel) magnitude? 
This decision would have a significant effect on how quantization is performed, but it's not clearly articulated in the paper. Figure 1 suggests a log scaling, but does not provide details. An equation would go a long way here.\", \"Is there any windowing applied in the short-time Fourier transform (eg Hann or Hamming)? I would expect so based on the lack of transient artifacts in Figure 1, but it's not explicitly stated. I ask because having listened to the provided examples, it sounds like there could be some modulation artifacts in the reconstruction that could be traced to the choice of window function. Aside: if you're using an existing software package to implement your Mel spectrogram, it should be cited.\", \"I like the approach of mapping automatic scores to human judgments, but I'm confused as to why the targets were binarized. Why not do an ordinary least squares or isotonic regression, that would discard less of the information?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
tV6oBfuyLTQ | Parameter-Based Value Functions | [
"Francesco Faccio",
"Louis Kirsch",
"Jürgen Schmidhuber"
] | Traditional off-policy actor-critic Reinforcement Learning (RL) algorithms learn value functions of a single target policy. However, when value functions are updated to track the learned policy, they forget potentially useful information about old policies. We introduce a class of value functions called Parameter-Based Value Functions (PBVFs) whose inputs include the policy parameters. They can generalize across different policies. PBVFs can evaluate the performance of any policy given a state, a state-action pair, or a distribution over the RL agent's initial states. First we show how PBVFs yield novel off-policy policy gradient theorems. Then we derive off-policy actor-critic algorithms based on PBVFs trained by Monte Carlo or Temporal Difference methods. We show how learned PBVFs can zero-shot learn new policies that outperform any policy seen during training. Finally our algorithms are evaluated on a selection of discrete and continuous control tasks using shallow policies and deep neural networks. Their performance is comparable to state-of-the-art methods. | [
"Reinforcement Learning",
"Off-Policy Reinforcement Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=tV6oBfuyLTQ | https://openreview.net/forum?id=tV6oBfuyLTQ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"26sJLzrNS4",
"dwYGWH60GYY",
"LomGK7wphhT",
"Wli947Qy_Lz",
"bHhFSkP0-Zc",
"tofpwUP1xo4",
"Cyt3xM2FXOU",
"Zla6VsFdTM",
"SpIjtO37Fw",
"oauto5hltgE",
"d-Tsd1w0dj",
"980_hAiVQHp",
"m9JAbFwlTF",
"8zIQ3hpLJl",
"Sfpqx48_sLO",
"JTEltzIuyy0",
"Os6JEvI3MJ3",
"Y3B__Pbf8zI",
"kXgBOyJEJQG",
"SngPqjOLQ99",
"qigPvtUr-z_",
"jsG95bnyqkJ",
"_0J73CTAQpa"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040361030,
1606300221585,
1606299280002,
1606185054111,
1606172315068,
1606165508763,
1606143299625,
1606139976301,
1606139461275,
1606123855579,
1606123175342,
1606050168894,
1605987910268,
1605987849033,
1605987591485,
1605987472380,
1605982951319,
1605981944245,
1605981117141,
1603954617550,
1603941156959,
1603614553148,
1603594379590
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3426/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The reviewers generally found the idea interesting and the contribution of the paper significant. I agree, I think this is quite a neat idea to investigate, and the paper is written well and is engaging to read.\\n\\nI would encourage the authors to take into account all of the reviewer suggestions when preparing the camera-ready version. Of particular importance is the name: I think it's bad form to appropriate a name already used in other prior work (proto-value functions, which are very well known in the RL community), so I think it is very important for the final to change the name to something that does not conflict with an existing technique. Obviously this does not affect my evaluation of the paper, but I trust that the authors will address this feedback (I will check the camera-ready).\"}",
"{\"title\": \"Summary of the updates in the revision\", \"comment\": [\"Thanks again for the in-depth feedback by the reviewers during this rebuttal period.\", \"We have submitted an improved version of our paper. Below we provide a summary of the most important changes.\", \"We changed the problem formulation, using the limiting distribution under the behavioral policy $d_{\\\\infty}^{\\\\pi_{\\\\theta}}(s) = \\\\lim_{t \\\\rightarrow \\\\infty} P(s_t = s| s_0, \\\\pi_b)$ instead of the discounted weighting of states $d^{\\\\pi_{\\\\theta}}(s') = \\\\int_{\\\\mathcal{S}}\\\\sum_{t=1}^{\\\\infty} \\\\gamma^{t-1} \\\\mu_0(s) P(s \\\\rightarrow s', t, \\\\pi_{\\\\theta}) \\\\mathrm{d} s$ in the off-policy objective function. In practice, we do not have access to samples of $d_{\\\\infty}^{\\\\pi_{\\\\theta}}(s)$, which is approximated by sampling trajectories.\", \"We included an ablation to test whether removing $\\\\nabla_{\\\\theta}Q(s,a,\\\\theta)$ from the PAVF's gradient affects the performance. We obtained a significant decrease in return in the Swimmer environment with shallow and deep policies. In Hopper the performance also dropped, but less significantly.\", \"We expanded the connection between our methods and PENs [1] in the main paper and we compared our work with more algorithms trying to solve the problems of off-policy RL. We included many comparisons between our approach and traditional methods [2,3] in Sections 1,2,3 and in the related work section.\", \"We included results with PSSVF and PSVF using stochastic shallow and deep policies in all environments. The results are sometimes inferior to those with deterministic policies, but can still outperform the baselines in some environments. 
Although the use of stochastic policies can help smooth the objective function and allows the agent to explore in action space, we believe that the lower variance provided by deterministic policies can facilitate learning PVFs.\", \"We tested PSVF and PAVF in LQR and included visualization for $V(s_0, \\pi_{\\theta}(s_0))$ and $Q(s_0, \\pi_{\\theta}(s_0),\\theta)$ over the policy space. The goal of this experiment was to assess whether PVFs can learn the underlying structure in the parameter space, and from the results it seems that PSVF and PAVF are able to effectively bootstrap the values of future states.\", \"We included more zero-shot learning results using PSSVF, PSVF and PAVF with shallow and deep policies in different environments. We obtained results similar to those in the main paper when using shallow policies, and we discussed possible reasons why deep policies fail in this task when using complex environments.\", \"[1] Jean Harb, Tom Schaul, Doina Precup, and Pierre-Luc Bacon. Policy evaluation networks. arXiv preprint arXiv:2002.11833, 2020.\", \"[2] Thomas Degris, Martha White, and Richard S. Sutton. Off-policy actor-critic. In Proceedings of the 29th International Conference on International Conference on Machine Learning, ICML\\u201912, pages 179\\u2013186, USA, 2012. Omnipress.\", \"[3] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML\\u201914, pages I\\u2013387\\u2013I\\u2013395. JMLR.org, 2014.\"]}",
"{\"title\": \"We agree that OffPAC is a principled method for tabular policies\", \"comment\": \"We agree that OffPAC is an important work and that it is theoretically sound when using tabular policies. We mentioned in the paper that it converges for tabular policies and we stated more clearly that their gradient is biased for the objective they introduce.\\n\\n>Equations 1 and 2 are repeated in 8 and 9.\\n\\nIn Eq. 8 and 9 we provided the on-policy policy gradient for PVFs and we proved it in Appendix 2. The proof is trivial but we included it with the on-policy formulation for completeness. We simply showed that $\\nabla_{\\theta}Q(s,a,\\theta)$ can be written in terms of $d^{\\pi_{\\theta}}$, exactly like in the original proofs.\"}",
"{\"title\": \"Theoretical advantage is still not certain, but I agree that the idea has merit\", \"comment\": \"I still think that the convergence to the optimal solution in the tabular case for the standard OffPAC provides a good theoretical grounding for the algorithm. And in the non-tabular case what matters is which algorithm will lead to better performance on the original objective $J$, and from the theory it is not that clear which one will be better (indeed the standard OffPAC is widely used, and seems to give decent performance). I think it should at least be mentioned in the paper that the original OffPAC proved convergence in the tabular case even when ignoring $\\\\nabla_\\\\theta Q$. But basically, I agree that it is an interesting question to see what effect the $\\\\nabla_\\\\theta Q$ term has, so I will update my score accordingly.\", \"some_more_notes_about_the_current_draft\": \"\\\"while traditional off-policy actor-critic methods introduce off-policy policy gradients that are only biased approximations to the true gradient since they do not estimate or compute the gradient of the action-value function with respect to the policy parameters\\u2207\\u03b8Q\\u03c0\\u03b8(s,a)(Degris et al., 2012; Silver et al., 2014)\\\"\\n\\nThis should probably say that it is an approximation to the true gradient of the OffPAC objective, otherwise it may be misleading.\\n\\nEquations 1 and 2 are repeated in 8 and 9.\\n\\nProbably it should be mentioned that dropping the $\\\\nabla_\\\\theta Q$ term was justified in previous work by showing that the algorithm will still converge in the tabular setting (but you can argue why you think your approach is better).\"}",
"{\"title\": \"Clarification about related work\", \"comment\": \"> We believe that it is important to mention a connection with synthetic gradients[1,2], because although they focus on supervised learning tasks, they include the possibility of learning maps from policies activations to gradients, losses or cumulative rewards, which is a setting similar to ours.\\n\\nThe main issue is that including this barely-related work (different setting, different quantity being predicted, different motivations, etc.) has relegated an in-depth discussion of the most closely-related work (PENs) to the appendix. I strongly suggest removing the following chunk, or at the very least moving it to an appendix to create room for the discussion of PENs:\\n\\n> In 1990, adaptive critics trained by TD were used to predict the gradients of an RNN from its activations (Schmidhuber, 1990), avoiding backpropagation through time (BPTT) (Werbos, 1990). This idea was later used to update the weights of a neural network asynchronously (Jaderberg et al., 2017). In our work, the critic is predicting errors instead of gradients. If applied to POMDPs, or supervised learning tasks involving long time lags between relevant events, the PSSVF could avoid BPTT by viewing the parameters of an RNN as a static object and mapping them to their loss (negative reward).\"}",
"{\"title\": \"Possible theoretical advantage\", \"comment\": \"It is true that in the approximate case the optimum of $J_b$ will not necessarily correspond to the optimum of $J$. In general, it is also not guaranteed to find the global maximum of $J_b$ at all, as the objective might be highly non-convex. What one can hope is to prove that an algorithm converges to a point which corresponds to one of the local maxima of $J_b$. The short answer and main argument for why we believe PVFs offer a theoretical advantage is that they optimize for $J_b$, while we do not know what Off-PAC (Degris et al.) or DPG (Silver et al.) are optimizing for, since their gradient is not the gradient of any objective. Therefore, if we are able to prove that in the actor-critic algorithm with PAVF the actor converges, it necessarily converges to a local optimum of $J_b$.\\n\\nDegris et al. tried to prove that the actor converges to a local optimum of $J_b$ in their Th 3, using stochastic approximation arguments. The theorem shows that, under some assumptions, the policy parameters converge to a point such that the approximate gradient of $J_b$ is zero. Unfortunately, in the non-tabular setting it is not guaranteed that this will also be a local maximum for $J_b$. With PAVF some of the conditions in the stochastic approximations (Borkar, 2009) must be modified: Q would be linear in some feature of the states and in the policy parameters; since Q receives $\\theta$ as input, we would work in a continuous setting; $GTD(\\lambda)$ should be extended to PAVF. Under these modified conditions plus the standard stochastic approximation conditions, we believe it would be possible to follow a similar approach to Th 3 in Degris et al. in order to satisfy the necessary conditions that ensure that the actor converges to a local optimum of the objective it is optimizing for. 
Note that here there would be no distinction between tabular or non-tabular policies: since we are optimizing $J_b$ while following the true gradient, the only requirement on the policy is that it satisfies some assumptions on stochastic approximation and we do not need further assumptions (e.g. tabular policy) for the gradients to match. \\n\\nIn this work we did not try to prove convergence of PVFs (not even for tabular policies), as many of the assumptions needed are far from what we use in the experiments (linear value function, direct samples from $d_{\\\\infty}^{\\\\pi_b}$). However, we believe that, since our PAVF has access to the true gradient of $J_b$, it will converge to a local optimum of $J_b$. In future work, we plan to formally prove it and to compare the performance of PAVF, Off-PAC, DPG and other off-policy actor-critic methods under linear value-function approximation. \\n\\nVivek S Borkar.Stochastic approximation: a dynamical systems viewpoint, volume 48. Springer,2009.\"}",
"{\"title\": \"In the non-tabular function approximation case\", \"comment\": \"In the function approximation case, with a space of realizable policies $C$, you can only find the optimal policy for $J_b$ contained in $C$. This optimal policy will not necessarily coincide with the optimal policy for $J$ nor with the optimal policy for $J$ contained within $C$. Therefore, I am not convinced that your approach has a theoretical advantage, as both your approach and Degris' approach converges to the optimal policy for $J$ in the tabular case, and neither of the approaches is guaranteed to converge to the optimal policy in the non-tabular function approximation case.\\n\\nHowever, I am no longer concerned about the newly proposed policy gradient theorems, which lifts my main concern.\"}",
"{\"title\": \"A more detailed comparison with Off-PAC\", \"comment\": \"We provided clarifications in the answer above regarding a comparison of the true and approximate gradient of $J_b$. Below we extend them based on the more recent comments by the reviewer.\\n\\nWe agree with the reviewer that the maximum of $J$ is the maximum of $J_b$, assuming we are in a markovian setting and the support of $\\\\mu_0$ is included in the support of $d_{\\\\infty}^{\\\\pi_b}$. We stated this explicitly in the paper.\\n\\n>In the no function approximation setting, Degris' work actually proved that even if you omit the $\\\\nabla_{\\\\theta}Q$\\n term, the policy gradient algorithm will converge to the optimal solution of $J_b$\\n (this also requires that $d^b$\\n will visit all states in $d^{\\\\pi}$\\n), so adding in this extra term is not necessary to solve the objective, and it's not clear that adding it in will improve the performance (see Theorems 1 and 2 in Degris).\\n\\nUnfortunately, the proofs by Degris et al. (https://arxiv.org/pdf/1205.4839.pdf) work only if the policy is tabular (i.e. with different weights for each state). The errata in Appendix B in Degris et al. states \\\"The current theoretical results only apply to tabular representations for the policy $\\\\pi$ and not necessarily to function approximation for the policy\\\". Our results, instead, are valid for general policy parametrization.\\n\\nIt seems to us that our policy gradients are more sound than those in Degris et al., Silver et al., due to these considerations: following the exact gradient of $J_b$, if we can obtain the maximum of $J_b$, under covering assumptions this will match the maximum of $J$. On the other hand, Degris et al. (probably the same argument holds in Silver et al.) for non-tabular policies claim: \\\"Because the approximate gradient is not\\nthe gradient of any objective function, it is not clear if any stable minima exist\\\" (errata in Appendix B in Degris et al). 
Moreover, the policy improvement theorem for $J_b$ holds only in the tabular setting (or in the on-policy case), so it is not guaranteed that $J_b$ is maximized when following the approximate gradient. Nevertheless, as pointed out in the previous response, it might happen in some cases that the approximate gradient is closer to the gradient of $J$ than the true gradient of $J_b$ (at least in the near-on-policy case).\"}",
"{\"title\": \"Problem formulation and J vs J_b\", \"comment\": \"We thank the reviewer for their prompt reply.\\n\\nWe changed our problem formulation such that, like in the original formulation by Degris et al., we use the limiting distribution of the states under the behavioral policy. We called this term $d_{\\\\infty}^{\\\\pi_b}$ in order to distinguish it from the discounted weighting of states $d^{\\\\pi_b}$. We updated the theoretical results accordingly. In practice, we do not have access to $d_{\\\\infty}^{\\\\pi_b}$, so in the derivation of the algorithms and in the experiments we approximate it by sampling trajectories generated by the behavioral policy. This is the same formulation and approximation done in ACER (Wang et al).\\n\\n>Reiterating my main argument, a simple sanity check for any off-policy policy gradient theorem is whether the theorem gives an unbiased gradient estimator in the on-policy case. This sanity checks works for the previous gradient theorems by Degris et al and Silver et al, which are unbiased in the on-policy case, but not for the method proposed in this paper; \\n\\n\\n\\nWe agree with the reviewer that there is confusion in the literature about the off-policy objective. One main question is the following. Let $J_b$ be the practical off-policy objective that is widely used (Degris et al, Imani et al., Wang et al.) and let $J$ be the true RL objective. Let $\\\\tilde \\\\nabla_{\\\\theta}J_b(\\\\pi_{\\\\theta})$ be the approximate gradient of $J_b$ provided by Degris et al. and let $\\\\nabla_{\\\\theta}J_b(\\\\theta)$ be the true gradient of $J_b$ (ours, Imani et al.). Under which conditions is $|\\\\nabla_{\\\\theta}J(\\\\pi_{\\\\theta}) - \\\\tilde \\\\nabla_{\\\\theta}J_b(\\\\pi_{\\\\theta})|<|\\\\nabla_{\\\\theta}J(\\\\pi_{\\\\theta}) - \\\\nabla_{\\\\theta}J_b(\\\\theta) |$? 
In other words, when is the approximate gradient for $J_b$ a better direction of improvement for the original problem than the true gradient for $J_b$? If we are on-policy, we agree with the reviewer that clearly the approximate gradient is better, because it simply reverts to the on-policy policy gradient. However, when we are off-policy, this is an open question. In particular, for a single state the answer depends on $d_{\\\\infty}^{\\\\pi_b}(s)$, $d_{\\\\infty}^{\\\\pi_{\\\\theta}}(s)$ and $\\\\mu_0(s)$, while in expectation it depends also on $Q$, $\\\\pi_{\\\\theta}$ and $\\\\gamma$. Finding a solution to this problem is beyond the scope of this work and we acknowledge that in the off-policy setting it is not clear whether it is better to use the true gradient of $J_b$ (like ours and Imani et al.) or to follow an approximate approach (Degris et al., Wang et al.). Our ablation suggests that, in the Swimmer environment, using $\\\\nabla_{\\\\theta}J_b(\\\\theta)$ instead of $\\\\tilde \\\\nabla_{\\\\theta}J_b(\\\\pi_{\\\\theta})$ is better when evaluating $J$. Note that works trying to find the true gradient for $J_b$, like ours and Imani et al., will necessarily have an on-policy policy gradient that is biased, because the objective functions $J_b$ and $J$ do not match everywhere. \\n\\n>I'm afraid I am voting to reject the paper unless the newly proposed theorems are removed, or the problem setting is changed so that the new theorems would correspond to the problem setting (as described in my original review). Perhaps another way I may be able to accept is if the paper were to admit that probably it is incorrect to add the extra term, but they choose to add it anyway to explore the idea (it is true that there is some confusion in the literature about this). 
However, currently the paper claims that the new theorems are more correct than the previous ones, and I don't find that appropriate.\\n\\nWe revised some claims in the paper and acknowledged that our gradients, despite being exact for $J_b$, might be worse than the approximate gradients for maximizing $J$.\\n\\n>Regarding the references I provided for correcting the distribution shift [...]. However, the main point of these works was that off-policy corrections can be performed by chaining importance weights together over the whole trajectory. \\n\\nWe agree with the reviewer that in trajectory-based off-policy RL one can correct for the distribution shift of the entire episode by taking the product of importance weights over the trajectory. Like in the works suggested by the reviewer, it would be possible to use multi-step estimation of the action-value function $Q(a,s,\\\\theta)$ and techniques like Retrace could help to reduce the variance.\"}",
"{\"title\": \"LQR task\", \"comment\": \"\\\"The purpose of our LQR experiment was to show how $V_w(\\\\theta)$\\nis able to approximate $J(\\\\pi_{\\\\theta})$ over the policy parameter space. We did not compare this with $V(s,\\\\theta)$ and $Q(s,a,\\\\theta)$ because in order to represent these value functions in a 2D plot we would have to remove the bias from the policy, hence the comparison would not be fair. \\\"\\n\\nYou can evaluate $V(s, \\\\theta)$ and $Q(s,a,\\\\theta)$ on the LQR task as a function of only $\\\\theta$ by taking the expectation over $\\\\mu(s)$. E.g. sample many points from $\\\\mu$, take the average of $V(s,\\\\theta)$ for a particular $\\\\theta$ and show that this average has a sensible value (by plotting it, and comparing).\"}",
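The evaluation suggested in this comment can be sketched in a few lines of Python. This is a hypothetical toy, not the paper's code: the quadratic critic `v_s_theta` and the standard-normal start distribution are illustrative assumptions standing in for a learned V(s, theta) on a 1-D LQR. Averaging over states sampled from mu turns the state-and-parameter critic into a function of theta alone, which can then be plotted.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): evaluate a state-and-parameter
# value function V(s, theta) as a function of theta alone by averaging over
# start states s ~ mu, as suggested for the LQR visualization. The quadratic
# critic below and the standard-normal mu are illustrative assumptions.

rng = np.random.default_rng(0)

def v_s_theta(s, theta):
    # Toy stand-in: for a 1-D LQR with policy a = theta * s, the value is
    # quadratic in s, with a coefficient that depends on theta.
    return -(s ** 2) * (1.0 + (theta - 0.5) ** 2)

def v_of_theta(theta, n_samples=10_000):
    # Monte Carlo expectation over s ~ mu = N(0, 1).
    s = rng.normal(size=n_samples)
    return v_s_theta(s, theta).mean()

# Averaging gives a 1-D curve over theta that can be plotted next to
# V(theta); for this toy critic it peaks near theta = 0.5.
values = {theta: v_of_theta(theta) for theta in (0.0, 0.5, 1.0)}
```

Evaluating `v_of_theta` on a finer grid of theta values would give the plottable curve the reviewer asks for.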
"{\"title\": \"Changed my mind a bit\", \"comment\": \"I read a bit more literature, and one argument I came across for the $J_b$ objective is the following:\\n\\nIf $d^{\\\\pi_b}$ places probability mass everywhere where $\\\\mu_0$ places probability mass, then the optimal solution to\\n$\\\\int d^{\\\\pi_b}(s) V_\\\\theta(s)ds$ will also be optimal for the original objective $\\\\int \\\\mu_0(s) V_\\\\theta(s)ds$, because the optimal policy is optimal independent of the start state distribution. However, this argument will break down when function approximation is used. From this point of view, I agree that it is also reasonable to add the $\\\\nabla_\\\\theta Q$ term; however, it is still not clear whether it is theoretically better than the theorems in Degris and Silver, because of the other arguments I made, and for several other reasons that I will explain below. But I agree that experimentally evaluating the method with $\\\\nabla_\\\\theta Q$ is a valid research question, and if the discussion around the different theorems is good, I may like the paper.\\n\\nI list the main reasons for why it is not clear that the new theorems are better (not in the function approximation setting, nor in the setting with no function approximation):\\n- As I explained, the policy gradient from Degris and Silver appears theoretically closer to the gradient for the true on-policy objective, because it only omits the importance weights for the distributions, whereas the new method differs by both the importance weights, and the gradient (so, in the function approximation setting, it is not clear which method would be more appropriate for finding a solution that works well on the original RL objective).\\n- In the no function approximation setting, Degris' work actually proved that even if you omit the $\\\\nabla_\\\\theta Q$ term, the policy gradient algorithm will converge to the optimal solution of $J_b$ (this also requires that $d^b$ will visit all states in $d^\\\\pi$), so 
adding in this extra term is not necessary to solve the objective, and it's not clear that adding it in will improve the performance (see Theorems 1 and 2 in Degris).\\n\\nAnother work on solving the distribution shift problem is AlgaeDice (https://arxiv.org/pdf/1912.02074.pdf), and they have some good discussion on these topics.\\n\\n\\nIn conclusion, what I think should be explained in the paper is: 1) The solution to $J_b$ would also be optimal for the original objective as long as $d^b$ visits all states in $\\\\mu_0$. 2) Explain the pros and cons of adding or omitting the extra $\\\\nabla_\\\\theta Q$ term, and perform experimental analysis to try to solve the debate. It may be nice to find some simple toy example where adding in the extra $\\\\nabla_\\\\theta Q$ term will lead to a better solution (off the top of my head, your theorem would require that $d^b$ visits all states in $\\\\mu_0$ whereas Degris or Silver additionally require that $d^b$ visits all states in $d^\\\\pi$, so potentially you could show an example where your method beats theirs when the second condition is not met. However, I guess this will also require accurately learning Q in the states not visited by $d^b$, so I am not sure whether this is feasible).\"}",
"{\"title\": \"The goal of off-policy RL is to solve the original RL task using off-policy data\", \"comment\": \"Thank you for the response and the clarifications.\\n\\nThe goal of off-policy RL is to solve the original RL task using\\noff-policy data. I don't think this is controversial. Any newly\\ndefined objective, such as $\\\\int d^b V(s) ds$ is useful only in so far\\nas it helps in solving this original RL task. In fact, all researchers\\nwho claim to use this off-policy objective, never actually evaluate the\\nperformance of their algorithms based on this objective. To evaluate\\nthe performance, everyone uses the episodic returns corresponding to\\nthe original RL objective that sums the rewards over the trajectory\\ndistribution, not the value functions, because that is the true\\nobjective they are interested in (also the work by the authors here\\ndoes the same).\\n\\nReiterating my main argument, a simple sanity check for any off-policy\\npolicy gradient theorem is whether the theorem gives an unbiased gradient\\nestimator in the on-policy case. This sanity check works for the previous\\ngradient theorems by Degris et al and Silver et al, which are unbiased in\\nthe on-policy case, but not for the method proposed in this paper; the\\ngradient would be biased in the on-policy case due to adding an\\nunnecessary $\\\\nabla_\\\\theta Q$ term. Due to this, I cannot see how the\\nnewly proposed policy gradients could be conceived as being somehow\\ntheoretically more sound than the original works by Degris or Silver.\\nWhile these previous works are biased due to ignoring the distribution shift\\nfrom $d^\\\\pi$ to $d^b$, the current work is biased due to both ignoring\\nthe distribution shift, and due to adding the extra gradient term.\\n\\nIndeed Degris and Silver mentioned that they believe they should have\\nthis extra $\\\\nabla_\\\\theta Q$ term. 
However, Degris mentioned this\\nbecause they considered a different setting where $d$ is not the\\ndiscounted state visitation distribution, but the stationary\\ndistribution as $t\\\\to \\\\infty$, while Silver just copied Degris's\\nreasoning, but failed to mention that they had swapped $d$ to be the\\ndiscounted state visitation distribution without providing any\\njustification for why they swapped it. Moreover, the algorithms they\\nended up with in the end matter more than what their intentions were\\nwhen creating their algorithm (I believe their intentions are\\nirrelevant to deciding which policy gradient theorems are more sound).\\n\\nI'm afraid I am voting to reject the paper unless the newly proposed\\ntheorems are removed, or the problem setting is changed so that the\\nnew theorems would correspond to the problem setting (as described in my\\noriginal review). Perhaps another way I may be able to accept is if the\\npaper were to admit that probably it is incorrect to add the extra term,\\nbut they choose to add it anyway to explore the idea (it is true that\\nthere is some confusion in the literature about this). However, currently\\nthe paper claims that the new theorems are more correct than the\\nprevious ones, and I don't find that appropriate.\", \"about_some_other_details_in_the_response\": \"Regarding the references I provided for correcting the distribution shift,\\nindeed these were not fully appropriate; I took them from another publication,\\nbut should have checked them more carefully, and I am sorry. However, the main point\\nof these works was that off-policy corrections can be performed by chaining\\nimportance weights together over the whole trajectory. To perform the\\noff-policy correction from $d^b$ to $d^\\\\pi$, it is necessary to know\\nthe importance weight $d^\\\\pi_t/d^b_t$ for each time step $t$. 
These\\ndistributions can be expressed as\\n$d_t(s) = \\\\int \\\\mu_0(s_0)\\\\pi(a_0|s_0)p(s_1|s_0,a_0)\\\\pi(a_1|s_1)...p(s_t|s_{t-1}, a_{t-1}) ds_0da_0ds_1...ds_{t-1}da_{t-1}$, where $p$ is the transition dynamics.\\nThen, by writing\\n$\\\\int d_t(s_t)\\\\pi(a_t|s_t) \\\\nabla_\\\\theta\\\\log \\\\pi(a_t|s_t)Q(s_t,a_t) ds_tda_t = \\\\int \\\\pi(a_t|s_t)\\\\nabla_\\\\theta\\\\log \\\\pi(a_t|s_t)Q(s_t,a_t)da_t\\\\mu_0(s_0)\\\\pi(a_0|s_0)p(s_1|s_0,a_0)\\\\pi(a_1|s_1)...p(s_t|s_{t-1}, a_{t-1}) ds_0da_0ds_1...ds_{t-1}da_{t-1}ds_t$,\\nit becomes possible to compute an unbiased policy gradient by applying importance weights along a sampled trajectory.\\nBy taking the ratio, the transition dynamics disappears, because it is\\nthe same for both policies, and one is left with the importance weight\\n$\\\\prod_{k=0}^{t-1}\\\\frac{\\\\pi_\\\\theta(a_k|s_k)}{\\\\pi_b(a_k|s_k)}$ that depends only on\\nthe actions chosen along the trajectory. This makes it possible to correct for the\\ndistribution shift, and methods such as Retrace could be useful\\nin trading off bias and variance in this importance sampling correction.\"}",
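The chaining described in this comment reduces to a cumulative product of per-step action ratios once the transition dynamics cancels. A minimal sketch, using hypothetical per-step log-probabilities rather than a real policy:

```python
import numpy as np

# Minimal sketch of trajectory-level importance weighting: the ratio of the
# per-step state distributions collapses to the product of action ratios
# pi_theta(a_k|s_k) / pi_b(a_k|s_k), since the dynamics p cancels. Inputs
# are hypothetical per-step log-probabilities of the actions actually
# taken along one trajectory.

def chained_importance_weights(logp_target, logp_behavior):
    """Cumulative products of pi_theta(a_k|s_k) / pi_b(a_k|s_k)."""
    log_ratios = np.asarray(logp_target) - np.asarray(logp_behavior)
    # Summing in log space and exponentiating once avoids numerical
    # underflow/overflow on long trajectories.
    return np.exp(np.cumsum(log_ratios))

# On-policy sanity check: identical policies give weights of exactly 1.
w_on = chained_importance_weights([-1.2, -0.3, -2.0], [-1.2, -0.3, -2.0])

# Off-policy: weights drift multiplicatively along the trajectory, which
# is the variance problem that truncation schemes like Retrace address.
w_off = chained_importance_weights([-0.5, -0.5], [-1.0, -1.0])
```

The multiplicative drift visible in `w_off` is exactly why Retrace-style clipping is attractive in this correction.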
"{\"title\": \"Response to reviewer #4 with comments and improvements [1/4]\", \"comment\": \"We thank the reviewer for their valuable and detailed feedback. The insightful comments provided by the reviewer have helped us to significantly improve our submission.\", \"below_are_our_specific_responses_to_the_concerns_raised_by_the_reviewer\": \">The theoretical issues in this paper start in equation 1. They write: \\\"... we can express the maximization of the expected cumulative reward in terms of the state-value function:\\\"\\n$$J(\\\\pi_{\\\\theta}) = \\\\int_{\\\\mathcal{S}} d^{\\\\pi_{\\\\theta}}(s) V^{\\\\pi_{\\\\theta}}(s) \\\\mathrm{d} s$$\\n (in the paper)\\nwhere $d(s)$ is the discounted state visitation distribution. However, this is not the RL objective. The RL objective would be\\n$$J(\\\\pi_{\\\\theta}) = \\\\int_{\\\\mathcal{S}} d^{\\\\pi_{\\\\theta}}(s) R(s) \\\\mathrm{d} s$$\\n (what it should actually be)\\n\\nLet us start from the on-policy setting. There was a major point of confusion when we claimed that the on-policy objective is $J(\\\\pi_{\\\\theta}) = \\\\int_{\\\\mathcal{S}} d^{\\\\pi_{\\\\theta}}(s) V^{\\\\pi_{\\\\theta}}(s) \\\\mathrm{d} s$ and we agree that the RL objective in the on-policy case is $J(\\\\pi_{\\\\theta}) = \\\\int_{\\\\mathcal{S}} \\\\mu_0(s) V^{\\\\pi_{\\\\theta}}(s) \\\\mathrm{d} s$, which becomes $J(\\\\pi_{\\\\theta}) = \\\\int_{\\\\mathcal{S}} \\\\mu_0(s) V(s,\\\\theta)\\\\mathrm{d} s$ using PVFs. \\n\\nIn the on-policy formulation, the gradient of the action-value function with respect to the policy parameters is not present in the original on-policy policy gradient theorems (Th 1 in Sutton, 1999 [1] and Th1 in Silver, 2014 [2]). This term is also not present in the on-policy policy gradient theorem with PVFs. 
In the on-policy case we can expand it using Bellman equation and following the exact same procedure as in Sutton, 1999 [1] and Silver, 2014 [2] we obtain an expression that depends on $d^{\\\\pi_{\\\\theta}}(s)$ (see Th 3.1 and Th 3.2 in revised pdf). \\n \\n \\nIn the off-policy case, however, this is different. A widely used objective for off-policy RL is $J_b(\\\\pi_{\\\\theta}) = \\\\int_{\\\\mathcal{S}} d^{\\\\pi_b}(s) V^{\\\\pi_{\\\\theta}}(s) \\\\mathrm{d} s$, where $d^{\\\\pi_b}$ is the limiting distribution of the states under $\\\\pi_b$ for continuing tasks[3,4] or the discounted weighting of states encountered starting at $s_0 \\\\sim \\\\mu_0(s)$ and following the policy $\\\\pi_{b}$[2] when we have access to trajectories. In our work we use the latter, although our results can be easily extended to the continuing setting.\\n\\n When taking the gradient of the off-policy objective, either in standard RL or with PVF, the gradient of the action-value function with respect to the policy parameters must be estimated and can no longer be condensed into $d^{\\\\pi_{\\\\theta}}$, since we have only access to $d^{\\\\pi_b}$. In DPG[2], this term is completely ignored in eq. 15 as \\\"Analogous to the stochastic case (see Equation 4), we have dropped a term that depends on $\\\\nabla_{\\\\theta}Q^{\\\\pi_{\\\\theta}}(s,a)$; justification similar to Degris et al. (2012b)[3] can be made in support of this approximation\\\". On the other hand, Degris et al.[3] claim \\\"The final term in this equation, $\\\\nabla_{\\\\theta}Q^{\\\\pi_{\\\\theta}}(s,a)$, is difficult to estimate in an incremental off-policy setting. The first approximation involved in the theory of OffPAC is to omit this term\\\". They provide a justification on this approximation proving that the set of stationary points of the approximated gradient is included in the set of stationary points for the true gradient. 
However, in the off-policy setting, this is true only when the policy is tabular (see errata in section B of Degris et al.[3]). PAVFs can directly estimate this term, providing a more theoretically sound off-policy policy gradient for $J_b$.\\n\\n[1]Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour.Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS\\u201999, pages 1057\\u20131063, Cambridge,MA, USA, 1999. MIT Press. \\n[2]David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML\\u201914, pages I\\u2013387\\u2013I\\u2013395.JMLR.org, 2014. \\n[3]Thomas Degris, Martha White, and Richard S. Sutton. Off-policy actor-critic. In Proceedings of the 29th International Conference on International Conference on Machine Learning, ICML\\u201912, pages 179\\u2013186, USA, 2012.Omnipress. \\n[4] Ehsan Imani, Eric Graves, and Martha White. An off-policy policy gradient theorem using emphatic weightings. In Advances in Neural Information Processing Systems, pages 96\\u2013106, 2018.\"}",
"{\"title\": \"Response to reviewer #4 with comments and improvements [2/4]\", \"comment\": \">My strongest argument for why the original off-policy derivations by Degris et al and Silver et al are less flawed is the following: If we are on-policy, i.e. $\\\\pi_b = \\\\pi$ and $d^{\\\\pi_b} = d^{\\\\pi}$we would want the off-policy policy gradient theorem to be unbiased, hence it should revert to the standard policy gradient theorem. In the formulations of Degris and Silver, this is indeed the case, and these theorems would be unbiased in the on-policy setting. The new theorem in the current paper, on the other hand, would have an extra $dQ/d\\\\theta$ term, which would bias the gradient. Therefore, I do not see any good theoretical reason to add this term. Moreover, the practical performance did not improve, so there is little evidence to suggest it as a heuristic either.\\n\\nIf we were optimizing $J_b$ on-policy, the off-policy policy gradients in Degris et al. and Silver et al. would be equivalent to the on-policy policy gradient only because they make the aforementioned approximation of dropping $\\\\nabla_{\\\\theta}Q^{\\\\pi_{\\\\theta}}(s,a)$. If they were able to estimate $\\\\nabla_{\\\\theta}Q^{\\\\pi_{\\\\theta}}(s,a)$, they would also have an additional term. The fact that if we take $d^{\\\\pi_b} = d^{\\\\pi_{\\\\theta}}$ and $\\\\pi_b = \\\\pi_{\\\\theta}$ we do not have the on-policy policy gradient theorem is a problem of the objective function, which does not consider the distribution shift and NOT a problem of our method, which is an improvement upon Degris et al. and Silver et al. 
If we accept $\\\\int_{\\\\mathcal{S}} d^{\\\\pi_b}(s) V^{\\\\pi_{\\\\theta}}(s) \\\\mathrm{d} s$ as off-policy objective, which is the objective that many researchers are using, our off-policy policy gradients are exact.\\n\\n\\n>they replace the distribution $d^{\\\\pi}(s)$ with a distribution $d^{\\\\pi_b}(s)$ \\n gathered using a behavioral policy (so they are working off-policy). However, they do not apply an importance weighting correction for the distribution shift, and just ignore the importance weights (Note that this is also done by Silver et al (2014) in deterministic policy gradients, and by Lillicrap et al (2015) in DDPG, so it is not that strange per se, as long as it gives better practical performance. However, it should at least be acknowledged that the importance weights are being ignored). Note that they still apply an importance weight on the actions ($\\\\pi(a|s)/\\\\pi_b(a|s)$) once the state is sampled from the data buffer, however, this does not correct for the distribution shift from $d^{\\\\pi}$ to $d^{\\\\pi_b}$, so the policy gradient computed using such a method will necessarily be biased. For example, see the following works for examples that try to deal with the distribution shift problem: Munos et al (2016, https://arxiv.org/abs/1606.02647), Wang et al (2016, https://arxiv.org/abs/1611.01224), Gruslys et al (2017, https://arxiv.org/abs/1704.04651)\\n\\nWe agree with the reviewer that we should acknowledge that in our off-policy formulation we are ignoring the distribution shift from $d^{\\\\pi_b}(s)$ to $d^{\\\\pi_{\\\\theta}}(s)$. We related this to recent methods trying to solve the distribution shift problem [5]. However, it seems to us that the papers suggested by the reviewer do not deal with the distribution shift problem from $d^{\\\\pi_b}(s)$ to $d^{\\\\pi_{\\\\theta}}(s)$. 
They deal instead with the variance introduced by the importance weights (IWs) $\\\\frac{\\\\pi_{\\\\theta}(a|s)}{\\\\pi_{b}(a|s)}$ and the bias introduced when IWs are clipped. In particular, ACER(Wang et al), is mentioned even in a paper on the distribution shift problem[5] as one of the methods that, like ours, completely ignore the distribution shift. It is worth mentioning that ACER(Wang et al) is using an off-policy formulation similar to ours. They start from the formulation of Degris et. al. with $d^{\\\\pi_b}(s)$ being the stationary distribution of the behavioral policy and they then approximate it using trajectories from the behavioral policy. We instead, similar to Silver et. al., define directly the off-policy objective with respect to the data obtained from the trajectories.\\n\\n>Another more minor theoretical issue in the paper is that while the theory considered the discounted state visitation distribution, the discount factors are not added into the policy gradient in the algorithmic sections. This omission is common, and tends to work well as a heuristic (but it should at least be mentioned that such an approximation is made). See the following papers for more discussion on this: Nota and Thomas (2020, https://arxiv.org/abs/1906.07073) Thomas (2014, http://proceedings.mlr.press/v32/thomas14.html)\\n\\nWe mentioned the omission of the discount factor from $d^{\\\\pi_b}(s)$ when deriving algorithms for $V(s,\\\\theta)$ and $Q(s,a,\\\\theta)$. We thank the reviewer for pointing out this issue.\\n\\n\\n[5]Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Off-policy policy gradient with state distribution correction.arXiv preprintarXiv:1904.08473, 2019.\"}",
"{\"title\": \"Response to reviewer #4 with comments and improvements [3/4]\", \"comment\": \">some discussion around off-policy learning seemed incomplete.\\n\\nWe expanded our related work section including other algorithms that try to improve the off-policy policy theorem from Off-PAC.\\n\\n>Then they test a similar zero-shot learning procedure as they did for $V(\\\\theta)$ at different stages of the learning (but as far as I understood, for $V(s,\\\\theta)$ they sampled data from the replay buffer when training the policy (thus not fully without interacting with the data). Perhaps the authors can clarify this), and show that the newly learned policy can outperform the behavior policy, thus demonstrating the generalizability of the method.\\n\\nWe would like to clarify our terminology of the 'offline setting': The requirement for an RL task to be offline is that there is no additional interaction with the environment when optimization is started. In our offline RL experiment, the critic needs to interact with the offline dataset in order to be trained. In an offline setting it is OK to use the data in order to perform policy gradient updates. The only important aspect is that no additional data is coming from the environment after we start learning. \\n \\n>Test the parameter value functions using the standard policy gradients without adding the $dQ/d\\\\theta$ term. Because you are using $Q(s,a,\\\\theta)$, there may be some learning to generalize across different policies due to the theta input, so it may outperform the original policy gradients without changing the policy gradient theorem. Actually, it would have been better to perform such experiments as an ablation study from the beginning anyhow.\\n\\nWe included in Appendix A.3.1 an ablation when training PAVF without the last part of the gradient $\\\\nabla_\\\\theta Q(s,a,\\\\theta)$ in Swimmer and Hopper and with shallow and deep policies. 
We used the same procedure as for the original PAVF in order to tune the hyperparameters and evaluate the final performance on 20 different seeds. From the results, we observed that using the biased gradient (the one without $\\\\nabla_\\\\theta Q(s,a,\\\\theta)$) the performance dropped significantly in Swimmer. In Hopper we observed a much smaller drop, possibly because both algorithms are converging to a sub-optimal behavior.\\n\\n>Test also $V(s,\\\\theta)$ on LQR as well as on zero-shot learning while sampling s from the initial state distribution $\\\\mu(s)$. This does not require interacting with the environment (because you never apply any action), and I would consider it fair in terms of comparing to $V(\\\\theta)$. If the learning from the TD error is working well, I would expect it to outperform the $V(\\\\theta)$ formulation in the zero-shot task. \\n\\n>Test also $Q(s,a,\\\\theta)$ on the LQR task to show it's correctness (for example by sampling s from the initial state distribution and computing the action). It may also be nice to test it in the zero-shot task as well.\\n\\nWe are currently running additional experiments on zero-shot learning using $V(s,\\\\theta)$ and $Q(s,a,\\\\theta)$ in different environments and we will include them before the end of the rebuttal. However, it is difficult to have a fair comparison between them since, in this task, data are coming from the policy we are learning and thus strongly depend on the algorithm being used. In other words, in order to disentangle generalization and learnability, we would like to learn zero-shot having access to the same data. Perhaps a fair comparison would include an offline scenario like in our last experiment in which first we collect full trajectories using some policies $\\\\pi_b$, then train offline PSSVF, PSVF and PAVF and finally train their policies. 
\\n\\nThe purpose of our LQR experiment was to show how $V_w(\\\\theta)$ is able to approximate $J(\\\\pi_{\\\\theta})$ over the policy parameter space. We did not compare this with $V(s,\\\\theta)$ and $Q(s,a,\\\\theta)$ because in order to represent these value functions in a 2D plot we would have to remove the bias from the policy, hence the comparison would not be fair. Before the end of the rebuttal, we will provide some visualization plots of $V(s,\\\\theta)$ and $Q(s, \\\\pi(s), \\\\theta)$ as a function of $s$ and $\\\\theta$, where a deterministic policy with one weight and no bias is used for learning. Note that for the LQR experiment we minimally tuned the hyperparameters because our goal was simply to visualize the algorithm operation and not to achieve the best performance.\"}",
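The ablation discussed in this response (Appendix A.3.1) hinges on the extra gradient term. A hypothetical one-dimensional sketch, not the paper's implementation: with a parameter-based critic Q(s, a, theta) and a deterministic policy a = theta * s, the exact gradient of Q(s, pi_theta(s), theta) with respect to theta has two chain-rule terms, dQ/da * da/dtheta plus the extra dQ/dtheta that Degris et al. and Silver et al. drop. The toy critic `q` below is an illustrative assumption; central finite differences make the two estimators explicit.

```python
# Hypothetical 1-D sketch of the ablation: exact vs. biased policy gradient
# for a parameter-based critic Q(s, a, theta). The critic is an illustrative
# toy, chosen so that it genuinely depends on theta as an input.

def q(s, a, theta):
    return -(a - s) ** 2 - 0.1 * (theta - 1.0) ** 2

def gradients(s, theta, eps=1e-6):
    a = theta * s  # deterministic policy a = pi_theta(s) = theta * s
    # Central finite differences for the two partial derivatives.
    dq_da = (q(s, a + eps, theta) - q(s, a - eps, theta)) / (2 * eps)
    dq_dtheta = (q(s, a, theta + eps) - q(s, a, theta - eps)) / (2 * eps)
    exact = dq_da * s + dq_dtheta   # both chain-rule terms
    biased = dq_da * s              # dQ/dtheta dropped, as in DPG/Off-PAC
    return exact, biased

exact, biased = gradients(s=2.0, theta=0.5)
# The two estimators differ by exactly the dQ/dtheta term being ablated.
```

For this toy the gap between `exact` and `biased` is the finite-difference estimate of dQ/dtheta, i.e. the quantity the ablation removes.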
"{\"title\": \"Response to reviewer #4 with comments and improvements [4/4]\", \"comment\": \">Currently the most compelling evidence is the zero-shot task, which shows that there is some generalization happening in the $\\\\theta$ space; however, what is missing to me, is a demonstration of how this additional generalization helps in solving the original task in a more data-efficient manner. Perhaps interleaving the policy search with longer sessions of off-line learning (without any interaction) using $dV/d\\\\theta$ to take advantage of the generalization may improve the data-efficiency and show the advantage of the new method (explaining good practices on how to do this may be a useful contribution).\\n\\nWe would like to emphasize that the generalization observed in the zero-shot learning experiments is already affecting the main results. In particular, with PSSVFs we alternate between online interaction with the environment, where the policy collects more data, and offline learning, where first the PSSVF is trained for 10 gradient steps and then the policy is trained completely offline for another 10 gradient steps. We found that, across many environments, 10 offline gradient steps were a good tradeoff between exploiting the generalization of V and remaining in the part of the parameter space where V is accurate. Measuring the generalization in the zero-shot learning tasks can be useful for determining the number of offline gradient steps to perform. Our algorithms using PSVFs and PAVFs also perform multiple offline gradient steps, since the behavioral policy is changing every episode, while the policy is updated every 50 timesteps.\\n\\n>I would expect $V(s,\\\\theta)$ to outperform $V(\\\\theta)$ due to using the state information, but this did not appear to be the case.\\n\\n>I think it would also be important to show compelling evidence that including the s input helps in learning better $V$ and $Q$ functions. 
Perhaps there are also other ways to better show the advantage of the method.\\n\\nWe believe that in most of the cases it is hard to see an improvement of PSVF and PAVF over the simple PSSVF, because our algorithms based on TD learning, despite having the information on the state, have a much more complicated function to learn. Similarly, one could argue that ARS is outperforming DDPG in most of the tasks. Here the most interesting comparison is the one between $V(s,\\\\theta)$, $Q(s,a,\\\\theta)$ and DDPG and the one between $V(\\\\theta)$ and ARS. Apart from Reacher, the PAVF obtained better results than DDPG in Swimmer, MountainCarContinuous and sometimes in Hopper.\\n\\n>How did the computational times compare? Was there much of an overhead to using the more complicated critics including theta as an input?\\n\\nRegarding the computational time, if we performed the same number of gradient steps with PAVF as with DDPG, our algorithm would be 4 times slower when using a 2-layer (64,64) MLP as policy. However, since our PAVF does not need to constantly track a single policy, we need far fewer policy and value function updates. In the experiments we performed, PAVF with fewer updates was 30\\\\% faster than DDPG when using a deep policy.\"}",
"{\"title\": \"Response to reviewer #1 with comments and improvements\", \"comment\": \"We thank the reviewer for their valuable feedback.\\nWe have improved our submission, here is a summary:\\n\\n>1- Regarding the first algorithm, PSSVF, until converging, the data that is stored in the replay buffer does not correspond to a \\\"reasonable\\\" policy, unless having a prioritized replay buffer. I am also concerned about it being over fitted to the early policies and not being able to overcome this. I see in the experiments that using PSSVF, policy is converged but am not convinced about it.\\n\\nWe agree with the reviewer that PSSVF might be oversampling initial policies and that prioritized replay could help to sample data more uniformly.\\n We did not include different sampling techniques because we wanted to provide results for the simplest algorithms, limiting the number of tricks necessary.\\n Note that this problem affects the PSVF and PAVF less, since they receive much more data (one per transition instead of one per episode) and in some environments (Swimmer and Hopper) they have a small replay buffer corresponding to 1/10 of the available data during learning.\\n \\n>2- Another concern of mine wrt the proposed PVFs is about their sample efficiency. It would be interesting to see a comparison between DDPG and PVF based methods on their sample efficiency.\\n\\n In our experiments we have already performed an extensive study comparing sample efficiency between DDPG, ARS and PVFs.\\n In particular, figures 2,9 and 10 analyze the sample efficiency on 7 different tasks and 3 policy architectures for all methods. \\n \\n>3- In the experiments section (4.2), except for a few cases, ARS is either the best one or does not differ significantly from PVF based methods. Having this in mind, my question is that have you tried to find the best set of hyperparameters for ARS and DDPG as well as your proposed method? 
If the answer is no, I would like to see the experiments where ARS and DDPG have their best set of hyperparameters.\\n\\n In all our experiments we performed an extensive hyperparameter search for ARS and DDPG.\\n The results in figures 2,9 and 10 correspond to the best hyperparameters found in ARS and DDPG.\\n We reported the value of the best hyperparameters for ARS and DDPG in Tables 5 and 8 and a sensitivity analysis for all algorithms in figures 11,12,13,14,15.\\n We reported the procedure we used to find the best hyperparameters and to evaluate all algorithms at the beginning of Appendix A.4.3.\\n Note that, apart from the Reacher task, ARS is never significantly better than PSSVF.\\n \\n>4- I am also interested in seeing results for deep policy zero shot learning. In section 4.3, authors just mention: \\\"When using deep policies, we obtained similar results only for the simplest environments.\\\" which is not convincing without showing results.\\n\\n Before the end of the rebuttal, we will include some results for zero-shot learning using deep policies, as well as more zero-shot learning results using PSVF and PAVF with linear policies. As mentioned in the paper, with deep policies we will have good zero-shot performance only in the simplest tasks.\\n \\n>5- As mentioned in the last part of the paper, the proposed method hugely suffers from the curse of dimensionality. However, as an initial step, PVFs seem interesting and could be beneficial in terms of learning generalized value functions.\\n\\n We agree that the curse of dimensionality is the main limitation of our approach and we believe that our experimental results provide a strong baseline for methods that try to reduce the dimensionality of the policy, such as policy embeddings.\\n\\n>[1] Sutton, Richard S., et al. 
\\\"Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction.\\\" The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2. 2011.\\n\\nWe did not find this citation in the review. Could the reviewer elaborate on this? We are happy to expand our connections with General Value Functions.\"}",
"{\"title\": \"Response to reviewer #3 with comments and improvements\", \"comment\": \"We thank the reviewer for their interest and suggestions.\\nWe have improved our submission, here is a summary:\\n\\nAs pointed out by the reviewer, $d^{\\\\pi_{\\\\theta}}(s)$ is the discounted weighting of states encountered starting at $s_0 \\\\sim \\\\mu_0(s)$ and following the policy $\\\\pi_{\\\\theta}$ and not a distribution.\\n We modified the background section in order to reflect this. We agree that in our off-policy formulation we are ignoring the distribution shift from $d^{\\\\pi_b}(s)$ to $d^{\\\\pi_{\\\\theta}}(s)$.\\n However, our off-policy objective is widely used with $d^{\\\\pi_b}(s)$ being the discounted weighting of states when working with a start-state formulation[1] or the limiting distribution of states under $\\\\pi_b$ in continuing problems[2,3]\\n There are works trying to correct for the distribution shift and deal with the challenge of estimating $\\\\frac{d^{\\\\pi_{\\\\theta}}(s)}{d^{\\\\pi_b}(s)}$[4].\\n We compared our methods to theirs and acknowledged the bias introduced by this formulation.\\n Note that in theorem 3.1 (theorem 3.3 in the updated version) the importance sampling correction $\\\\frac{\\\\pi_{\\\\theta}(a|s)}{\\\\pi_{b}(a|s)}$ is still required from the action-selection process when using stochastic policies.\\n\\n>PVF: to me this acronym is strongly synonymous with Mahadevan's proto-value functions (PVFs), circa 2007. How about \\\"PBVF\\\" instead? 
Maybe I'm old\\n\\nWe will investigate the use of the acronym PBVF in the literature and use it instead of PVFs if it provides less overlap.\\n\\n>we optimize for the undiscounted objective this should be reflected in your notation and problem formulation\\n\\nWe clarified our usage of the discount factor.\\n In particular, when training $V(\\\\theta)$ we ignore the discounting in the reward because we are in the episodic setting.\\n Using $V(s,\\\\theta)$ and $Q(s,a,\\\\theta)$ we want to predict the cumulative expected discounted reward, so we use $\\\\gamma < 1$.\\n When training the actor, we ignore the discount factor in $d^{\\\\pi_b}$.\\n This is a widely used approximation[5,6] and we clarified this in the paper.\\n\\n\\n>can be used only for episodic tasks it doesn't have to. See \\\"regenerative method\\\" in Monte Carlo estimation literature\\n\\nWe mentioned the regenerative method as a possible use of PSSVF for non-episodic tasks.\\n\\n[1] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML\\u201914, pages I\\u2013387\\u2013I\\u2013395. JMLR.org, 2014. \\n[2] Thomas Degris, Martha White, and Richard S. Sutton. Off-policy actor-critic. In Proceedings of the 29th International Conference on International Conference on Machine Learning, ICML\\u201912, pages 179\\u2013186, USA, 2012. Omnipress. \\n[3] Ehsan Imani, Eric Graves, and Martha White. An off-policy policy gradient theorem using emphatic weightings. In Advances in Neural Information Processing Systems, pages 96\\u2013106, 2018. \\n[4] Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Off-policy policy gradient with state distribution correction. arXiv preprint arXiv:1904.08473, 2019. \\n[5] Philip Thomas. Bias in natural actor-critic algorithms. 
In International conference on machine learning, pages 441\\u2013448, 2014. \\n[6] Chris Nota and Philip S Thomas. Is the policy gradient a gradient? arXiv preprint arXiv:1906.07073, 2019.\"}",
"{\"title\": \"Response to reviewer #2 with comments and improvements\", \"comment\": \"We thank the reviewer for their valuable feedback.\\nWe have improved our submission, here is a summary:\\n> The paper is clearly written for the most part, with the exception of some parts of the related work that are overly terse (i.e., the connection with UVFAs could be expanded). Other parts of the related work seem frankly unrelated (i.e., predicting gradients of RNNs from their inputs in the 90s, and mapping weights of CNNs to their accuracy), and I would recommend removing them in favour of moving the more detailed comparison of PENs and PVFs into the main paper. \\n\\n>The paper doesn\\u2019t mention related work that fixes the Off-PAC policy gradient theorem, which gives an expression for the true gradient of the off-policy objective without requiring PVFs (Imani 2018).\\n\\nWe expanded the discussion on the connection with UVFAs, PENs, and alternative approaches for deriving an off-policy policy gradient theorem. We believe that it is important to mention a connection with synthetic gradients[1,2], because although they focus on supervised learning tasks, they include the possibility of learning maps from policies activations to gradients, losses or cumulative rewards, which is a setting similar to ours.\\n\\n>I was disappointed to see that only the deterministic algorithms were implemented and analysed. Even if the stochastic versions of the algorithm are only demonstrated in a simple linear setting, that would be better than just not investigating them at all.\\n\\nWe agree with the reviewer on the importance of evaluating also stochastic policies. 
We are currently running more experiments and we will include some results for stochastic policies for the algorithms using $V(\\\\theta)$ and $V(s,\\\\theta)$.\\n\\n>Passing the actor\\u2019s parameters to the critic seems to necessarily break the requirement of compatible features for the actor to follow the true gradient of performance (Sutton 2000). It might be good to mention this.\\n\\nThe reviewer suggested that PVFs might avoid the requirement of compatible function approximation.\\n Unfortunately, with PVFs there are still linear conditions to be satisfied in order for $V_{w}$ or $Q_{w}$ to follow the true gradient.\\n In particular, $V_w(\\\\theta)$ needs to be linear in the policy parameters; $V_w(s,\\\\theta)$ needs to be linear in the policy parameters and in some fixed feature of the state; for $Q_w(s,a,\\\\theta)$ the conditions are identical to Off-PAC[3] and DPG[4], except for the requirement of Q to be linear in the policy parameters in the off-policy setting.\\n We did not include these results because in the experiments we are using nonlinear value functions.\\n However, they will be important when studying the convergence of the algorithms under linear value function approximation.\\n We mentioned these conditions in the updated version of the paper.\\n\\n[1] J\\u00fcrgen Schmidhuber. Networks adjusting networks. In Proceedings of \\u201dDistributed Adaptive Neural Information Processing\\u201d, pages 197\\u2013208, 1990 \\n[2] Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, David Silver, and Koray Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1627\\u20131635.JMLR. org, 2017. \\n[3]Thomas Degris, Martha White, and Richard S. Sutton. Off-policy actor-critic. 
In Proceedings of the 29th International Conference on International Conference on Machine Learning, ICML\\u201912, pages 179\\u2013186, USA, 2012. Omnipress. \\n[4] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML\\u201914, pages I\\u2013387\\u2013I\\u2013395. JMLR.org, 2014.\"}",
"{\"title\": \"Interesting idea that is well-investigated\", \"review\": \"### Summary:\\nThe paper proposes passing the parameters of a policy to the value function attempting to learn estimates of the return for that policy. This allows the value function to generalize across policies and estimate values for arbitrary policies. The paper derives several algorithms for various objectives and value functions, and empirically investigates the deterministic versions.\\n\\n### Pros:\\n- Several new algorithms are proposed\\n- The new algorithms can generalize across policies\\n- The new algorithms can estimate the value of unseen policies\\n\\n### Cons:\\n- Only the deterministic algorithms are empirically investigated\\n- Computation and memory cost seem quite high (the critic takes all of the actor\\u2019s parameters as arguments)\\n- Empirical results seem mixed\\n\\n### Decision\\nI recommend accepting the paper for publication.\\n\\nThe paper investigates a simple, interesting, original idea\\u2014including the actor\\u2019s parameters as inputs to the critic\\u2014fairly thoroughly. Several actor-critic algorithms are derived using expressions for the gradient of various performance measures obtained by including the actor\\u2019s parameters as inputs to the critic.\\n\\nThe benefits of doing this are illustrated by some experiments, and the deterministic versions of the new methods are compared with reasonable competitors (DDPG and ARS) in other experiments. Unfortunately the results seem somewhat limited by the number of runs that can be conducted by parameterizing the policies and value functions as neural networks and experimenting on the chosen environments. Overall the empirical results seem mixed; in many environments it\\u2019s fine to just disregard the second part of the gradient that is dropped in DDPG and computed by PVFs. 
However, that\\u2019s not the fault of the new algorithms, and there are some environments where not dropping the second part of the gradient is helpful.\\n\\nThe paper is clearly written for the most part, with the exception of some parts of the related work that are overly terse (i.e., the connection with UVFAs could be expanded). Other parts of the related work seem frankly unrelated (i.e., predicting gradients of RNNs from their inputs in the 90s, and mapping weights of CNNs to their accuracy), and I would recommend removing them in favour of moving the more detailed comparison of PENs and PVFs into the main paper.\\n\\n### Miscellaneous comments:\\n- Grammatical error in the final sentence of the abstract: \\u201cTheir performance is comparable to the one of state-of-the-art methods\\u201d\\n- \\u201cIn practice, like in standard actor-critic algorithms, we use a noisy version of the current learned policy in order to act in the environment and collect data\\u201d This should probably read standard deterministic actor-critic algorithms.\\n- I was disappointed to see that only the deterministic algorithms were implemented and analysed. Even if the stochastic versions of the algorithm are only demonstrated in a simple linear setting, that would be better than just not investigating them at all.\\n- The paper doesn\\u2019t mention related work that fixes the Off-PAC policy gradient theorem, which gives an expression for the true gradient of the off-policy objective without requiring PVFs (Imani 2018).\\n- Passing the actor\\u2019s parameters to the critic seems to necessarily break the requirement of compatible features for the actor to follow the true gradient of performance (Sutton 2000). It might be good to mention this.\\n\\n### References:\\n1. Imani, E., Graves, E., & White, M. (2018). An off-policy policy gradient theorem using emphatic weightings. In Advances in Neural Information Processing Systems (pp. 96-106).\\n2. Sutton, R. S., McAllester, D. 
A., Singh, S. P., & Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems (pp. 1057-1063).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting connection to DPG, few technical errors\", \"review\": \"On page 2, in the background section: the discounted state distribution, what you wrote is not a distribution (doesn't sum to 1). In order to define this $d^{\\\\pi_\\\\theta}$ properly, you can multiply everything by $1-\\\\gamma$. The interpretation is that you \\\"reset\\\" in your initial distribution $\\\\mu_0$ with probability $1 - \\\\gamma$ at every step, or continue in the discounted stationary distribution with probability $\\\\gamma$.\\n\\nI think that theorem 3.1 is incorrect. I think that this is meant to describe an off-policy setting where we are collecting data from $\\\\pi_b$ but want the policy gradient for $\\\\pi_\\\\theta$. In this case, the importance sampling weight should be $\\\\frac{d_\\\\theta(s,a)}{d_b(s,a)}$ not $\\\\frac{\\\\pi_\\\\theta(a|s)}{\\\\pi_b(a|s)}$ (where $d_b$ is the discounted stationary distribution, see above comment too). Equation 9 follows from the chain rule (because the Q function now depends on $\\\\theta$ explicitly) using the off-policy formulation in Degris (2012), which is incorrect.\\n\\nNotes:\\n- PVF: to me this acronym is strongly synonymous with Mahadevan's proto-value functions (PVFs), circa 2007. How about \\\"PBVF\\\" instead? Maybe I'm old\\n\\n> we optimize for the undiscounted objective\\nthis should be reflected in your notation and problem formulation\\n\\n> can be used only for episodic tasks\\nit doesn't have to. See \\\"regenerative method\\\" in Monte Carlo estimation literature\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting in terms of idea, but unclear advantage over common approaches\", \"review\": \"\\u2014 idea:\\n\\t\\nA new class of value functions is introduced where the value function takes the parameters of the policy as input, in addition to its common inputs (state or state-action). The proposed type of value functions, PVFs, are also useful for off-policy learning and generalizing over policies, while the common value functions lose their information about previous policies.\\n\\n\\u2014 comments:\\n \\n1- Regarding the first algorithm, PSSVF, until converging, the data that is stored in the replay buffer does not correspond to a \\\"reasonable\\\" policy, unless having a prioritized replay buffer. I am also concerned about it being overfitted to the early policies and not being able to overcome this. I see in the experiments that, using PSSVF, the policy converges, but I am not convinced about it.\\n \\n2- Another concern of mine w.r.t. the proposed PVFs is about their sample efficiency. It would be interesting to see a comparison between DDPG and PVF-based methods on their sample efficiency.\\n \\n3- In the experiments section (4.2), except for a few cases, ARS is either the best one or does not differ significantly from PVF-based methods. Having this in mind, my question is: have you tried to find the best set of hyperparameters for ARS and DDPG as well as your proposed method? If the answer is no, I would like to see those experiments where ARS and DDPG have their best set of hyperparameters.\\n \\n4- I am also interested in seeing results for deep policy zero-shot learning. In section 4.3, the authors just mention: \\\"When using deep policies, we obtained similar results only for the simplest environments.\\\" which is not convincing without showing results.\\n \\n5- As mentioned in the last part of the paper, the proposed method hugely suffers from the curse of dimensionality. 
However, as an initial step, PVFs seem interesting and could be beneficial in terms of learning generalized value functions.\\n\\n-- minor issues:\\n \\nIn the first line of the paragraph above the experiments section (4), starting with \\\"Algorithm 4 (Appendix) uses an ...\\\", there is a redundant \\\"and\\\". One of them should be removed.\\n\\n\\n\\nOverall, I liked the idea presented in the paper and would like to see what their next step would be. But the current version of the paper could benefit from more in depth experiments. I believe the most important weakness of the paper lies in the experiment section. It can be much richer and more insightful.\\n\\n\\n\\n[1] Sutton, Richard S., et al. \\\"Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction.\\\" The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2. 2011.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reject for theoretical reasons. (Update: the theoretical issues were cleared up)\", \"review\": \"**Update**\\n\\nI have updated my score to 7.\\nOne of the points that was not explained in the original paper was that (ignoring function approximation effects) an optimal solution for $J_b$ (the OffPAC objective) will be optimal also for the original off-policy RL objective $J$ (i.e. estimating the on-policy objective in an unbiased manner from off-policy data). From this point of view, I agree that optimizing $J_b$ directly is an interesting question, despite the fact that the exact gradient for $J_b$ may be less similar to the gradient of $J$ compared to the usually used approximate gradient of $J_b$ that drops the $\\\\nabla_\\\\theta Q$ term. It still remains unclear which of the two methods has a theoretical advantage over the other in the function approximation setting (in terms of optimizing for $J$); however, because it is unclear, it is interesting to evaluate the method proposed here and to perform experiments as done in the paper to try to find out which method performs better.\\n\\nThe results were mixed; however, the evaluation is fairly thorough and some potential advantages of the new methods such as generalization in the $\\\\theta$ space and zero-shot learning were explained.\\n\\nThe discussion in the paper is much improved compared to the original version. Also, additional ablation studies such as testing what happens when the $\\\\nabla_\\\\theta Q$ term is dropped were added (when $Q$ includes $\\\\theta$ as an input). Moreover, LQR experiments for $Q(s,a,\\\\theta)$ and $V(s,\\\\theta)$ were added in the appendix (the results here do not give as good a match as the $V(\\\\theta)$ formulation gave, but they are reasonable).\\n\\n______________________________________________________________\\n1. Summarize what the paper claims to contribute. 
Be positive and\\ngenerous.\\n\\nThey propose to include the policy parameters as an input\\nto the value function, so that the value function could generalize\\nacross different policies (there are 2 other concurrent works with a similar\\nidea, one they have cited and discussed \\\"Policy evaluation networks\\\" (https://arxiv.org/abs/2002.11833), another is submitted to ICLR2021 on OpenReview (https://openreview.net/forum?id=V4AVDoFtVM)).\\nThey put the policy parameters $\\\\theta$ as an input to the value function\\nin 3 cases $V(\\\\theta)$ (PSSVF), $V(s, \\\\theta)$ (PSVF), and $Q(s,a,\\\\theta)$ (PAVF).\\nThey propose new policy gradient theorems for the $V(s,\\\\theta)$ and\\n$Q(s,a,\\\\theta)$ cases (but I believe these to be theoretically flawed).\\n\\nThey perform experiments testing $V(\\\\theta)$ in 2 cases: 4.1) (sanity check\\nexperiment) visualizing and testing for correctness on an LQR task,\\n4.3) zero-shot learning: after training a policy $\\\\pi$ using the $V(\\\\theta)$ method,\\na new policy $\\\\pi_{new}$ is reinitialized and trained from scratch using only the\\ntrained $V(\\\\theta)$ without interacting with the environment. The interesting\\nbit was that $\\\\pi_{new}$ managed to outperform the learned policy during\\ndata collection $\\\\pi$ (this implies that the $V(\\\\theta)$ function managed to\\ngeneralize. It would have been nice to, in addition to $\\\\pi_{new}$, also\\nsee whether $\\\\pi$ could have been improved by just continuing to optimize\\nit without interacting with the environment, but this was not done).\\n\\nThey tested $V(\\\\theta)$, $V(s,\\\\theta)$ and $Q(s,a,\\\\theta)$ on MuJoCo tasks\\ncompared to augmented random search (this is similar to evolution\\nstrategies) and to deep deterministic policy gradients (DDPG). 
And\\nthe performance did not change much, and sometimes all the new methods\\nfailed when DDPG worked (on the reacher task).\\n\\nThe final experiment 4.4 was for offline learning with fragmented\\nbehaviors, i.e. they do not observe full episode data for a fixed\\ntheta, which makes it impossible to learn $V(\\\\theta)$ directly, but\\n$V(s,\\\\theta)$ can be learned by TD methods (also note that the data is\\ncollected from a different behavior policy). Then they test a similar\\nzero-shot learning procedure as they did for $V(\\\\theta)$ at different\\nstages of the learning (but as far as I understood, for $V(s,\\\\theta)$ they sampled\\ndata from the replay buffer when training the policy (thus not fully without\\ninteracting with the data). Perhaps the authors can clarify this), and\\nshow that the newly learned policy can outperform the behavior policy,\\nthus demonstrating the generalizability of the method.\\n\\n\\n2. List strong and weak points of the paper. Be as comprehensive as possible.\\n\\n\\\\+ The experiment on zero-shot learning is nice to show that the $V(\\\\theta)$\\nfunction can generalize.\\n\\\\+ The paper is clearly written.\\n\\\\+ They discuss a lot of related work.\\n\\\\+ The experimental methodology seemed mostly good and honest, and\\nwas explained in detail in the appendix (some nice points: They include\\na sensitivity analysis showing quantiles of the performance.\\nAlso the final best chosen hyperparameters were evaluated with\\n20 new seeds, separate from the 5 seeds used during hyperparameter tuning).\\n\\n\\\\- The new policy gradient theorems seemed flawed. Also some discussion\\naround off-policy learning seemed incomplete.\\n\\\\- The methods were not shown to experimentally lead to major gains.\\n\\\\- One of the difficulties with searching in parameter space is how\\nto deal with large parameter spaces. 
The two concurrent works considering\\n$V(\\\\theta)$ proposed solutions to this issue by embedding the policy into\\na smaller space. In the current work no solution is proposed. The\\nexperiments on zero-shot learning using $V(\\\\theta)$ were only good with\\nlow-dimensional linear policies.\\n\\\\- A sanity check experiment on LQR was performed for only $V(\\\\theta)$ (which was\\nthe only one for which the gradient was theoretically sound); it would\\nhave been good to do similar experiments for the other ones.\\n\\\\- I would expect $V(s,\\\\theta)$ to outperform $V(\\\\theta)$ due to using the\\nstate information, but this did not appear to be the case.\\n\\n\\n3. Clearly state your recommendation (accept or reject) with one or two\\nkey reasons for this choice.\\n\\nI recommend rejecting the paper due to the theoretical flaws in the newly\\nproposed policy gradient theorems using $V(s,\\\\theta)$ and $Q(s,a,\\\\theta)$. Also,\\nthe practical advantages of using $V(s,\\\\theta)$ and $Q(s,a,\\\\theta)$ were not shown.\\n\\n\\n4. Provide supporting arguments for your recommendation.\\n\\nThe theoretical issues in this paper start in equation 1. They write:\\n\\\"... we can express the maximization of the expected cumulative reward in terms of the state value function:\\\"\\n\\n$J(\\\\pi_\\\\theta) = \\\\int d^{\\\\pi}(s) V(s) ds,$ (in the paper)\\n\\nwhere $d(s)$ is the discounted state visitation distribution. However, this\\nis not the RL objective. The RL objective would be\\n\\n$J(\\\\pi_\\\\theta) = \\\\int d^{\\\\pi}(s) R(s) ds.$ (what it should actually be)\\n\\nThe authors probably took their objective from the work by\\nDegris et al (2012, https://arxiv.org/pdf/1205.4839.pdf);\\nhowever, in Degris'12, $d(s)$ is _not_ the discounted\\nstate visitation distribution. It is the limiting distribution as\\n$t \\\\to \\\\infty$, which is a stationary distribution. 
When $d^{\\\\pi}(s)$ is stationary, then the two objectives become equivalent: $d(s)$ does not change\\nfrom one time step to the next, so the difference between the objectives\\nwill be just a $1/(1-\\\\gamma)$ constant factor. Putting aside this issue,\\nprobably the limiting distribution formulation is not realistic as most\\nRL researchers consider the episodic setting, so using a discounted\\nstate visitation distribution is probably better. However, the newly\\nproposed policy gradient theorems do not appear sound for the true RL\\nobjective using $R(s)$.\\n\\nNext, they replace the distribution $d^{\\\\pi}(s)$ with a\\ndistribution $d^{\\\\pi_b}(s)$ gathered using a behavioral policy (so they are\\nworking off-policy). However, they do not apply an importance weighting\\ncorrection for the distribution shift, and just ignore the importance\\nweights (Note that this is also done by Silver et al (2014) in deterministic\\npolicy gradients, and by Lillicrap et al (2015) in DDPG, so it is not that\\nstrange per se, as long as it gives better practical performance. However,\\nit should at least be acknowledged that the importance weights are being\\nignored). Note that they still apply an importance weight on the actions\\n($\\\\pi(a|s)/\\\\pi_b(a|s)$) once the state is sampled from the data buffer, however,\\nthis does not correct for the distribution shift from $d^{\\\\pi}$ to $d^{\\\\pi_b}$,\\nso the policy gradient computed using such a method will necessarily be biased.\\nFor example, see the following works for examples that try to deal with the distribution shift problem:\\nMunos et al (2016, https://arxiv.org/abs/1606.02647),\\nWang et al (2016, https://arxiv.org/abs/1611.01224),\\nGruslys et al (2017, https://arxiv.org/abs/1704.04651)\\n\\nPutting aside the issue of whether ignoring the distribution shift is OK,\\nthe main issues are the new policy gradient theorems derived from this\\nformulation. 
Both the $V(s,\\\\theta)$ as well as $Q(a,s,\\\\theta)$ formulations appear flawed.\\n\\nIn the $V(s,\\\\theta)$ case they propose the policy gradient:\\n\\n$\\\\nabla_\\\\theta J(\\\\theta) = \\\\int d^{\\\\pi_b}(s) dV(s,\\\\theta)/d\\\\theta ~~ds$ in equation 8.\\n\\nHowever, the true policy gradient is:\\n$\\\\nabla_\\\\theta J(\\\\theta) = \\\\int \\\\mu(s) dV(s, \\\\theta)/d\\\\theta ~~ds,$\\nwhere $\\\\mu(s)$ is the start-state distribution. Actually they wrote\\nthis also in equation 7, when they considered the $V(\\\\theta)$ formulation,\\nbut for some reason sampled from $d(s)$ instead for $V(s, \\\\theta)$ when computing\\nthe policy gradient in the $V(s,\\\\theta)$ formulation.\\n\\nIn the $Q(a,s,\\\\theta)$ formulation, they add an extra $dQ/d\\\\theta$ term to\\nthe policy gradient. Their motivation is the following:\\n\\n$\\\\nabla_\\\\theta J(\\\\theta) = \\\\int d^{\\\\pi_b} (dQ(a=\\\\pi(s,\\\\theta),s,\\\\theta)/d\\\\theta) dads$\\n $ \\t= \\\\int d^{\\\\pi_b} dQ(a,s,\\\\theta)/da*da/d\\\\theta + dQ(a,s,\\\\theta)/d\\\\theta~~ dads$\\n\\nHowever, this derivation stems from the flawed definition of J that is\\nnot maximizing the sum of rewards over the trajectory distribution, but\\nmaximizing some other objective that sums the value functions at all states\\nin the trajectory distribution. My strongest argument for why the original\\noff-policy derivations by Degris et al and Silver et al are less flawed is\\nthe following: If we are on-policy, i.e. $\\\\pi_b = \\\\pi$ and $d^{\\\\pi_b} = d^{\\\\pi}$ we would want\\nthe off-policy policy gradient theorem to be unbiased, hence it should\\nrevert to the standard policy gradient theorem. In the formulations\\nof Degris and Silver, this is indeed the case, and these theorems would\\nbe unbiased in the on-policy setting. The new theorem in the current\\npaper, on the other hand, would have an extra $dQ/d\\\\theta$ term, which would\\nbias the gradient. 
Therefore, I do not see any good theoretical reason to\\nadd this term. Moreover, the practical performance did not improve, so\\nthere is little evidence to suggest it as a heuristic either.\\n\\nIf someone were to say that the original policy gradient\\ntheorem requires the $dQ/d\\\\theta$ term, I would urge them to look at the original\\nproofs---there is no approximation, these theorems are exact for the true\\nRL objective based on maximizing the rewards over the discounted trajectory\\ndistribution. The intuition is that the remaining $dQ/d\\\\theta$ term for the\\nremainder of the trajectory from a time-step t is estimated by summing\\nthe $dQ/da\\\\*da/d\\\\theta$ or $Q\\\\*dlog/d\\\\theta$ terms for all the future time-steps.\\n\\nAnother more minor theoretical issue in the paper is that while the\\ntheory considered the discounted state visitation distribution, the\\ndiscount factors are not added into the policy gradient in the algorithmic\\nsections. This omission is common, and tends to work well as a heuristic\\n(but it should at least be mentioned that such an approximation is made).\\nSee the following papers for more discussion on this:\\nNota and Thomas (2020, https://arxiv.org/abs/1906.07073)\\nThomas (2014, http://proceedings.mlr.press/v32/thomas14.html)\\n\\n\\n5. Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment. \\n\\nHow did the computational times compare? Was there much of an overhead to\\nusing the more complicated critics including theta as an input?\\n\\n\\n6. Provide additional feedback with the aim to improve the paper. 
Make it clear that these points are here to help, and not necessarily part of your decision assessment.\\n\\nFor me to change my assessment, first the theoretical issues should be\\nfixed or cleared up.\\n\\nNext, I have some possible suggestions:\\n1) Test also $V(s,\\\\theta)$ on LQR as well as on zero-shot learning while sampling\\ns from the initial state distribution $\\\\mu(s)$. This does not require interacting\\nwith the environment (because you never apply any action), and I would consider\\nit fair in terms of comparing to $V(\\\\theta)$. If the learning from the TD error\\nis working well, I would expect it to outperform the $V(\\\\theta)$ formulation\\nin the zero-shot task. \\n2) Test the parameter value functions using the standard policy gradients\\nwithout adding the $dQ/d\\\\theta$ term. Because you are using $Q(a,s,\\\\theta)$, there\\nmay be some learning to generalize across different policies due to the\\ntheta input, so it may outperform the original policy gradients without\\nchanging the policy gradient theorem. Actually, it would have been better to\\nperform such experiments as an ablation study from the beginning anyhow.\\n3) Test $Q(a,s,\\\\theta)$ also on the LQR task to show its correctness\\n(for example by sampling s from the initial state distribution and computing\\nthe action). It may also be nice to test it in the zero-shot task as well.\\n4) Perhaps test combinations of the various gradients, for example taking\\nthe average of the $V(\\\\theta)$ gradient with the policy gradient using $Q$\\n(i.e. taking the average of two equivalent policy gradients).\\n\\nIf the above points are convincingly done, I may increase to marginal\\naccept. 
The current contributions are not enough for me to go higher than\\nthat: taking away the proposed new policy gradients, the main contribution\\nis to add $\\\\theta$ as an input to $V$ and $Q$, which I think is not enough.\\nMoreover, the advantage of adding $\\\\theta$ as an input was not shown convincingly\\nusing compelling evidence. Currently the most compelling evidence is the\\nzero-shot task, which shows that there is some generalization happening in\\nthe $\\\\theta$ space; however, what is missing to me, is a demonstration of how\\nthis additional generalization helps in solving the original task in a more\\ndata-efficient manner. Perhaps interleaving the policy search with longer\\nsessions of off-line learning (without any interaction) using $dV/d\\\\theta$\\nto take advantage of the generalization may improve the data-efficiency\\nand show the advantage of the new method (explaining good practices on how\\nto do this may be a useful contribution). I think it would also be important\\nto show compelling evidence that including the s input helps in learning\\nbetter $V$ and $Q$ functions. Perhaps there are also other ways to better\\nshow the advantage of the method.\\n\\nAnother option may be to change the problem setup, so that the new policy gradient theorems would be more sound. For example, using the original formulation of Degris'12 where $d^{\\\\pi_b}(s)$ is the limiting distribution as $t \\\\to \\\\infty$ would make the new policy gradients correct; however, the standard setup would not correspond to this. One setup that would correspond to this objective is the following: an infinite horizon continuing setting, where the agent is never reset into the initial distribution, but has to continually change the policy to improve. 
The learning would iterate between running one behavioral policy until it converges to its stationary distribution, then optimizing a new policy while in the off-policy setting, then switching the behavioral policy to this new policy, and repeating the process. In this situation, $d^{\\\\pi_b}(s)$ can be seen as the initial distribution for the new policy, and in this case the new policy gradient theorems would make sense. My previous argument about wanting the policy gradient theorem to be unbiased in the on-policy case would also be satisfied, because if $d(s)$ is stationary then \\nthe $dQ/da\\\\*da/d\\\\theta$ and $dQ/d\\\\theta$ gradients would differ by only a constant factor.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
w6Vm1Vob0-X | Global Node Attentions via Adaptive Spectral Filters | [
"Shouheng Li",
"Dongwoo Kim",
"Qing Wang"
] | Graph neural networks (GNNs) have been extensively studied for prediction tasks on graphs. Most GNNs assume local homophily, i.e., strong similarities in local neighborhoods. This assumption limits the generalizability of GNNs, which has been demonstrated by recent work on disassortative graphs with weak local homophily. In this paper, we argue that GNN's feature aggregation scheme can be made flexible and adaptive to data without the assumption of local homophily. To demonstrate, we propose a GNN model with a global self-attention mechanism defined using learnable spectral filters, which can attend to any nodes, regardless of distance. We evaluated the proposed model on node classification tasks over six benchmark datasets. The proposed model has been shown to generalize well to both assortative and disassortative graphs. Further, it outperforms all state-of-the-art baselines on disassortative graphs and performs comparably with them on assortative graphs. | [
"Graph Representation learning",
"Graph Convolutional Network",
"Graph Fourier transform"
] | Reject | https://openreview.net/pdf?id=w6Vm1Vob0-X | https://openreview.net/forum?id=w6Vm1Vob0-X | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"oydovObXGUu",
"DXAA_TQ7Ucs",
"0ggdaegLRkU",
"UXBQa8CQ9wa",
"UzOn2CBoLSH",
"KAKIDwV7jA",
"l82AI_FTrTl",
"LmxJfkbq6r",
"PiW8rBImfGT",
"PR5MVY4JryT",
"tZ4mRgfd_Yc"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040372155,
1606287111000,
1606286356870,
1606169089927,
1606145014985,
1606143269160,
1606142260703,
1606141825515,
1603870819837,
1603836644799,
1603603777692
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3425/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3425/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3425/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3425/Area_Chair1"
],
[
"ICLR.cc/2021/Conference/Paper3425/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3425/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3425/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3425/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3425/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3425/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a GNN that uses global attention based on graph wavelet transform for more flexible and data-dependent GNN feature aggregation without the assumption of local homophily.\\n\\nThree reviewers gave conflicting opinions on this paper. The reviewer claiming rejection questioned the novelty of the paper and the complexity of the global attention mentioned in the paper. Even through the authors' responses and subsequent private discussions, concerns about complexity and novelty were not completely resolved.\\n\\n\\nConsidering the authors' claim that the core contribution of this paper is to design fully learnable spectral filters without compromising computational efficiency, it is necessary to consider why it is meaningful to perform global attention based on graph wavelet transform in the first place. In terms of complexity, although the wavelet coefficient can be efficiently calculated using the Chebyshev polynomials mentioned by the authors, in the attention sparsification part, n log n is required **for each node** in sorting, resulting in complexity of n^2 or more. There may still be an advantage of complexity over using global attention in a message-passing architecture, but it will be necessary to clarify and verify that, given that the proposed method uses an approximation that limits global attention within K hops.\\n\\nAlso, this paper modifies the graph wavelet transform in graph theory, which requires a deeper discussion. For example, as the authors mentioned, the original wavelet coefficient psi_uv can be interpreted as the amount of energy that node v has received from node u in its local neighborhood. The psi_uv defined by the learnable filter as shown in Equation 3 has a different meaning from the original wavelet coefficient. 
There is insufficient insight as to whether it is justifiable to use this value as an attention coefficient.\\n\\nOverall, the paper proposes potentially interesting ideas, but it seems to require further development for publication.\"}",
"{\"title\": \"Response to AnonReviewer1 further comments\", \"comment\": \"Thanks again for your comments. We are happy to discuss the questions further.\\n\\n### Novelty\\n\\nWe agree with you that, when K is large enough, ChevNet is able to incorporate the entire graph. However, as reported by Kipf and Welling in their work for GCN [5], simplified Chebyshev filters by restricting to 1-hop neighborhood (K=1) often provide better performance on graphs than the cases with larger K. Further, we believe ChevNet remains to be a localized method that resembles a low-pass diffusion filter in practice. A graph laplaian $L$ is closely related to random walks, and its power $L^k$ can be seen as a form of the stationary probabilistic state of step-$k$ random walks. A Chebyshev polynomial filter $g_\\\\theta(L)=\\\\sum_{k=0}^{K-1}\\\\theta_kL^k$ is essentially a sum of weighted random walk states where the step $k\\\\in \\\\{0, ..., K-1\\\\}$. Thus, the embedding of a node is a sum of features from its K-hop neighbours multiplied by a $k$-step random-walk probability matrix, and weighted by $\\\\theta_k$. Therefore, Chebyshev polynomial filters work similarly as pagerank kernels as shown by [6] which has also been shown to be closely related to heat kernels[7]. In fact, a recent work has shown Chebyshev polynomial filters are equivalent to graph diffusion (GDC)[8] which is based on the assumption of homophily, i.e. 'birds of a feather flock together', as reported by the authors.\\n\\nIn comparison, our method uses MLP which is not restricted by the aforementioned constraints, and thus is more adaptive. On one hand, we observed MLP converges to similar shapes of a heat kernel, with attentions being allocated to structurally similar nodes in a very similar way as reported by [9] on barbell graphs. 
On the other hand, on heterophilic networks, MLP emphasizes the mid-high frequency range which contributes the most to performance, which would not be achievable using heat or PageRank kernels. We agree with you that ChebNet can approximate a high-frequency filter in some cases. However, there are two key differences between the two models: \\n* ChevNet aggregates (or normalizes) node features via a spectral filter, whereas our model uses a wavelet basis to measure the similarity between nodes via attention (normalization/aggregation follows after this). \\n* Our model uses adaptive filters via multi-head attention, where each head learns attention weights from a different spectral filter, whereas ChevNet uses the `same polynomial filter` over the entire network. Therefore, ChebNet can only use overfitted (if it happens - ChevNet has a very small number of parameters so it is unlikely) filters whereas our model can use appropriate filters, through the standard training-validation process. \\n\\n### Complexity\\nIn our work, Eq.(3) is computed via Chebyshev polynomials, which means $\\\\psi_{u,v}$ for a pair of nodes $(u,v)$ is only computed when they are linked within $p$-hop neighborhood. Combining this with sparse-matrix structure, we can achieve complexity $O(p\\\\times|E|)$ for the attention part. \\n\\nGiven the confusion, we have added some details of the Chebyshev polynomial approximation in Appendix A. Please take a look and let us know if further clarification is needed.\\n\\n\\n[5] Thomas N. Kipf and Max Welling. \\u201cSemi-supervised classification with graph convolutional networks\\u201d \\\\\\n[6] Fan Chung and Wenbo Zhao. \\u201cPageRank and random walks on graphs\\u201d 2010\\\\\\n[7] Fan Chung. \\u201cThe heat kernel as the pagerank of a graph\\u201d 2007\\\\\\n[8] Johannes Klicpera, Stefan Wei\\u00dfenberger, and Stephan Gu\\u0308nnemann. \\u201cDiffusion Improves Graph Learning\\u201d 2019\\\\\\n[9] Claire Donnat et al. 
\\u201cLearning structural node embeddings via diffusion wavelets\\u201d 2018\"}",
"{\"title\": \"Runtime comparison\", \"comment\": \"Thanks for spending time reading the latest revision.\\n\\nFigure 3 was added for the purpose of demonstrating how the two sparsification tricks improve runtime efficiency of the model itself, which is why we didn't compare with other baselines. That said, we acknowledge an empirical comparison with other baselines would be useful. Given the time constraint, we hereby provide a simple comparison limited to two baselines. We will add the other benchmark results in later revisions.\\n\\n| Dataset | GAT | Geom-GCN | GNAN | GNAN-K |\\n| --- | --- | --- | --- | --- |\\n|Cora | 98ms | 172ms | 161ms | 95ms |\\n|Chame. | 71ms | 175ms | 114ms | 65ms|\\n\\nThe above runtime results are the milliseconds per training epoch averaged over 500 epochs. Our model GNAN is slower than GAT but faster than Geom-GCN. \\nNote the reported runtimes of GNAN are larger than the ones in Figure 3 because we disabled optimizations using sparse matrix operations for a fair comparison with the two baselines. GNAN-K has the optimization turned on, where K=5. \\n\\nResults are obtained on GeForce RTX 2080 Ti with 12G of GRAM.\\nFor the baselines, we use the source code from:\\\\\\\\\", \"gat\": \"https://github.com/Diego999/pyGAT\\\\\\\\\", \"geom_gcn\": \"https://github.com/graphdml-uiuc-jlu/geom-gcn\"}",
"{\"title\": \"Further comments\", \"comment\": \"Many thanks for the authors' detailed response. I have a few further questions.\\n## Novelty\\nRegarding the issue of ChevNet, I do not agree with the authors. By setting a large K (actually K does not need to be very large due to the small world property of networks, K:10~20), ChevNet is able to incorporate the entire graph. However, I do not think the argument \\\"all nodes within the hop are also captured, resulting in degraded performance on heterophilic networks\\\" is correct. If it may indeed learn a high pass filter, ChevNet will not perform only smoothing the local nodes and can also emphasize nodes that are far away. The failure of ChevNet is actually over-overparameterization and overfitting. Regarding this point, I still cannot see the clear contribution from non-local attention. \\n\\n## Complexity\\nI do not think the argument of authors is correct. Before the model performs top-K or any other attention sparsification, the model first needs to compute the entire \\\\phi matrix. I did not see any way to make it linear in O(V). Eq.(3) needs to first compute full matrix eigen decomposition, which is with complexity already more than w(|V|^2) itself. Actually, the essence of non-local attention also makes the model unable to leverage efficient matrix eigen decomposition because the non locality means the entire range of eigenvalues are needed. \\n\\n## Datasets\\nI also agree with AC1. The full comparison of the algorithmic complexity should be provided. The complexity of Eq.3 should also be provided. \\n\\nRegarding this, I still do not think the paper achieves the bar of ICLR.\"}",
"{\"title\": \"thanks for the response\", \"comment\": \"Thank you for providing detailed response.\\n\\nR1, could you carefully read the responses by authors?\\n\\nIn the mean time, I have quick question to authors: in figure 3, why did you just evaluate the computational efficiency of your methods? could you provide some comparisons against other baselines to support your statements?\"}",
"{\"title\": \"Response to AnonReviewer #1\", \"comment\": \"We greatly appreciate your helpful comments, and hereby address your concerns as follows:\\n\\n### (1) Novelty\\n\\nThe novelty of our model, besides the global attention you have mentioned, lies in the design of \\\"learning\\\" spectral filters. This learning ability on spectral filters empowers our model to adaptively discover a combination of low-frequency filters and high-frequency filters for learning meaningful node representations, without making any prior assumption on local homophily. This is also our key observation in this work that leads to overcoming the issue that existing GNNs cannot work well over heterophilic networks. In this regard, we do not claim that the novelty of our work is to observe this issue or use attention; instead, the novelty of our work is to solve this issue by designing fully learnable spectral filters without compromising computational efficiency. This is how our work differs from the existing work, including the work in [3] which pre-defines wavelet filters heuristically.\\n\\nSimilarly, for ChevNet [2], it is designed to localize within local node neighbourhoods, where the range of node neighbourhoods is determined by a hyper-parameter that is usually small. While ChevNet can reach far-away nodes using a large $K$, all nodes within the $K$ hop are also captured, resulting in degraded performance on heterophilic networks. Thus, this design restricts the generalizability of ChevNet on graphs when the assumption of local homophily does not hold.\\n\\nIn addition, we would like to thank the reviewer for pointing out the misuse of expressiveness. We have fixed the related sentence in Section 1.\\n\\n\\n### (2) Complexity\\n\\nOur work does not require computation for each node pairs. This is because we compute attention weights based on graph wavelet transform, which is approximated using Chebyshev polynomials (same trick used in [3]). 
Thus, our model has the complexity $O(m\\\\times |E|)$, where $|E|$ is the number of edges and $m$ is the order of Chebyshev polynomials. For comparison with the other methods targeting heterophilic networks, the complexity of the method in [4] is $O(N^2)$ and in [1] is $O(Nlog(N))$ where $N$ is the number of nodes. For real-world graphs, $|E|$ is often much smaller than $N^2$.\\n\\nRegarding attention sparsification, our model has introduced two sparsification techniques (please refer to the revised paper for technical details in Section 3 and related discussions on experiments in Section 4.2). Nonetheless, attention sparsification does not change the computational complexity of our model since the computational complexity measures the worst-case computational cost, not the actual computational cost. The purpose of sparsifying attentions in our work is to reduce the actual computational cost using a sparse attention matrix. We have added Figure 3 in the revised paper to illustrate how attention sparsification can help reduce the computational cost and thus improve runtime efficiency in our model. \\n\\n\\n### (3) Datasets\\n\\nWe have added experimental results for Chameleon (2,277 nodes and 36,101 edges) in our revised paper. We are also running experiments on another two datasets Actor (7,600 nodes and 33,544 edges) and Squirrel (5,201 nodes and 217,073 edges) from [4] and will report the results once they are available. Also, it might be worth mentioning that, among all the datasets in our experiments including Chameleon, Actor and Squirrel, PubMed is the largest dataset with 19,717 nodes and 44,338 edges from a computational perspective.\\n\\n\\\\\\nWe hope our responses have resolved your questions. Please let us know if you need any further clarification regarding our paper, and we hope you can re-evaluate our paper based on our responses.\\n\\n\\n[1] Meng Liu, Zhengyang Wang, and Shuiwang Ji. 
\\\"Non-Local Graph Neural Networks\\\", 2020 \\\\\\n[2] Micha \\u0308el Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering\\\" 2016 \\\\\\n[3] Bingbing Xu et al. \\u201cGraph Wavelet Neural Network\\u201d. 2019 \\\\\\n[4] Hongbin Pei et al. \\u201cGeom-GCN: Geometric Graph Convolutional Net-works\\u201d. 2020\"}",
"{\"title\": \"Response to AnonReviewer #2: more datasets and sparsification details\", \"comment\": \"Thank you very much for the constructive feedback. We have revised our paper accordingly. Please find below our responses.\\n\\n### (1) Attention sparsification\\n\\nFollowing your feedback, we have added further technical details (Section 3) and new experimental results (Section 4.2) in our revised paper. To gain a better understanding on how attention sparsity can affect efficiency, we have included two sparsification techniques in our experiments: (a) one is based on a threshold; and (b) the other is based on top-k sorting. As depicted in Figure 3 in the revised paper, both sparsification techniques can significantly improve runtime efficiency.\\n\\n### (2) Datasets\\n\\nIn the revised paper, we have added another larger disassortative network, Chameleon, which has 2,277 nodes and 36,101 edges. \\nOur method GNAN performs the best across all the baselines on this dataset. We are also doing further experiments on other disassortative networks, and will report the results once they are available. \\n\\nConducting experiments on synthetic graphs with a controllable $\\\\beta$ is a great idea. We will look into how such graphs can be constructed and add experimental results later on. In the meantime, if you are aware of any existing work that generates such graphs, please let us know.\\n\\n\\n### (3) Evaluation tasks\\n\\nNode classification is commonly used to evaluate the model performance by state-of-the-art GNN methods. To have a fair and comprehensive comparison with state-of-the-art GNN methods, we thus benchmark our model against these methods on node classification in this work. We appreciate your suggestions to benchmark on other evaluation tasks such as graph reconstruction and link predication, as well as graph classification, and will work on these tasks as a next step.\"}",
"{\"title\": \"Response to AnonReviewer 4: Thank you for the positive feedback\", \"comment\": \"Thank you for the positive comments. We are delighted to see that you are able to understand the paper, even though part of the paper is not in your area of expertise. We believe it is important to make the manuscript accessible to a wide range of researchers in the community who may not necessarily have deep knowledge on the subject. Please do not hesitate to let us know if you have any questions later on.\"}",
"{\"title\": \"This paper proposes a novel method for Graph Neural Networks with adaptive spectral filters that experimentally outeprform other GNN designs and has comparable performance with MLP in graphs having small local homophily.\", \"review\": \"I liked this paper quite a lot. Although this paper does not belong to my area of expertise, I was able to understand the paper clearly because of its lucid exposition. Experimentally, the authors show a novel GNN design with an attention module that has comparable performance to the MLP and outperforms other GNN designs. I believe that this will be a valuable contribution to many practical problems.\\n\\nUnfortunately, this work does not have any theoretical results, and evaluating the experimental results is outside my range of expertise. Therefore I would like to defer this paper to my fellow reviewers.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"Main Idea\\n\\nIn this paper, the authors study the problem of GCN for disassortative graphs. The authors proposed the GNAN method to allow attention on distant nodes indeed of limiting to local neighbors. The authors generalized the idea of graph wavelet with MLP to generate the attention score and utilized it to generate multiple attention heads. The authors carried out experiments on several real-world networks (4 assortative and 3 disassortative) with comparison to several state-of-art GCN methods.\", \"strength\": \"The authors study a very interesting problem of GCN/graph embedding or disassortative graphs.\\nThe proposed method is well motivated with solid theoretical motivation from graph wavelets. The proposed model is very intuitive generalization of graph wavelet methods.\\nThe empirical evaluation is very thorough on seven networks with comparison to about 10 baselines of different kinds.\", \"weakness\": \"Though the authors mentioned the use of sparsification of attention for speed-up, however, it mentioned that t is set to zero. It is interesting to see how scalable the proposed method is as it needs to have global attention to possibly all nodes. An empirical comparison of running time would be very helpful.\\n The authors only carry out experiments on three disassortative which are all very small. It would be interesting to see more experiments on disassortative graphs. Alternatively, it would be interesting to have an experiment on synthetic graphs where the \\\\beta can be controlled and varied smoothly to see how it affects the performance of different algorithms.\\nThe authors picked only node classification of evaluation tasks. 
It is interesting to see how disassortativity could impact other tasks like graph reconstruction and link prediction.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"High complexity and weak experiments\", \"review\": \"This work propose a new GNN architecture to help GNN break its limitation on only working over homophilic networks. The technical is to use introduce graph global attention.\\n\\nI think the paper is written okay. The motivation is clear. The solution is reasonable. However, I have following criticisms:\\n1. This work has limited novelty. Observing that GCN cannot work well over heterophilic networks is not a new idea and observation. Using attention to capture the features from far-away nodes is natural but not novel. I do not think that it is reasonable to argue against other works, e.g. [1] that adopts the above idea by saying they are not expressive enough. Expressiveness sometimes may lead to model overfitting. Actually, ChevNet [2] can also capture far-aways nodes and be expressive enough. Why does it not work well? I guess that it is due to some overfitting issue. Moreover, if I understand it correctly, the limited difference between this work and [3] is most likely the global attention, which has very limited contribution. \\n\\n2. Although the work claims everywhere to tend to decrease the complexity, when computing the global attention, one still needs to do computation for every pair of nodes, which is of course not scalable for even medium-sized graphs. \\n\\n3. The heterophilic networks used for evaluation are very small with only several hundred nodes. Why not try larger ones, say actor, Cham. in [4]? I guess the computational issue comes from the global attention. \\n\\n[1] Non-Local Graph Neural Networks.\\n[2] Convolutional neural networks on graphs with fast localized spectral filtering.\\n[3] Graph wavelet neural network\\n[4] Geom-gcn: Geometric graph convolutional networks.\\n\\n\\n---post-discussion update----\\nI would like to thank the authors for preparing the rebuttal and attending our discussion. However, I still think the complexity is a concern of this work. 
I do not think that Eq. (3) can be implemented within the complexity that the authors claimed. Moreover, if the authors use another way to compute the attention scores, that way should be very clearly stated instead of written in a different form. Given the high complexity, I cannot clearly see the advantage of this work in comparison to [1], as the non-local attention has been proposed in [1] already.\\n\\n[1] Non-Local Graph Neural Networks.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
cbdp6RLk2r7 | Addressing the Topological Defects of Disentanglement | [
"Diane Bouchacourt",
"Mark Ibrahim",
"Stephane Deny"
] | A core challenge in Machine Learning is to disentangle natural factors of variation in data (e.g. object shape vs pose). A popular approach to disentanglement consists in learning to map each of these factors to distinct subspaces of a model's latent representation. However, this approach has shown limited empirical success to date. Here, we show that this approach to disentanglement introduces topological defects (i.e. discontinuities in the encoder) for a broad family of transformations acting on images ---encompassing simple affine transformations such as rotations and translations. Moreover, motivated by classical results from group representation theory, we propose an alternative, more flexible approach to disentanglement which relies on distributed equivariant operators, potentially acting on the entire latent space. We theoretically and empirically demonstrate the effectiveness of our approach to disentangle affine transformations. Our work lays a theoretical foundation for the recent success of a new generation of models using distributed operators for disentanglement (see Discussion). | [
"Disentanglement",
"Equivariance",
"Topology",
"Representation theory",
"Character theory"
] | Reject | https://openreview.net/pdf?id=cbdp6RLk2r7 | https://openreview.net/forum?id=cbdp6RLk2r7 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"MaqKF7woen",
"ldfSrvN6wkQ",
"bImjtaN30-8",
"KXrSHYjOigy",
"VmO-Zm7ypeS",
"u303Z8E9R1c",
"F3Nb82_JweX",
"JFX45HKEhFK",
"tGV9ioKlW0N",
"5sI9IoC59mU",
"3NeMPey5HS3",
"OEFMBvLOYax",
"O7lPIOJSc3y",
"3crA7weUMIl",
"HN42rkn06HW",
"MpCSLT84Yk",
"EUDs1-oanjh",
"zJP3HDipisl",
"VPMXquybssI"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040464474,
1606218173673,
1606218002286,
1606133408644,
1606132722870,
1605708275004,
1605708254038,
1605708191739,
1605708164398,
1605708086589,
1605707999510,
1605707922370,
1605707735115,
1605707656651,
1604788004457,
1604415037747,
1603905299191,
1603830312308,
1603280778612
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3423/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3423/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3423/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3423/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3423/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3423/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This is a borderline case (quite comparable to the other borderline case in my batch). The paper has received careful reviews and based on my weighting of the different arguments I arrive at an average score between 5.75 and 6.. The authors present some worthwhile ideas related to disentanglement that deserves more attention and that could spark more research in this direction. At the same time, the level of novelty and significance of this work remains a bit limited. Taken together the paper is likely not compelling enough to be among the top papers to be selected for publication at ICLR.\"}",
"{\"title\": \"Response to Rev 2 and 4 (continued)\", \"comment\": \"_2) The hard-coded shift operator proposed only works for cyclic groups_\\n \\nWe agree that the setup presented in Section 4 (Distributed Disentanglement in Practice) only allows to learn (1) cyclic groups and (2) combinations of cyclic groups (e.g. direct product of cyclic groups like for example translations in x and y, or semi-direct product of cyclic groups like for example rotations + translations).\\n \\nHowever, we would like to put this limitation of our work in perspective by re-emphasizing here the main contribution of the paper and the goal of Section 4. The experiments in Section 4 are not the main contribution of the paper. Our main contribution is to prove that traditional disentanglement via subspaces introduces topological defects in the encoder for a large family of transformations---a contribution the reviewers think is valid and important. We then propose to learn transformations with distributed operators in latent space, because it is an alternative to disentanglement which resolves these topological defects while preserving the desiderata of isolating the factors of variation in data. We prove using representation theory that, at least in the case of affine transformations, we can find distributed operators which satisfy the topology of the problem (the shift operator). The only goal of Section 4 is to provide empirical evidence for the theoretical finding that an affine transformation in image space can be mapped to a distributed but not a disentangled operator in latent space. 
In conclusion, our contribution is not to provide a successful learning procedure for any and all groups, but to warn against using disentangling operators as a general strategy to learn transformations, and to justify theoretically an alternative strategy to disentanglement via distributed operators in latent space to learn these transformations.\\n \\n=> We will remove all ambiguous claims that may at all imply the shift operator could be used to learn all possible groups, e.g. the claim noted by the reviewer \\\"Importantly, the shift operator does not require knowledge of the transformation in advance, only the cycle order of each group\\\". \\n \\n=> We will rewrite Section 4 (Distributed Disentanglement in Practice) as a subsection of Section 3 (Learning Transformations with Distributed Operators as an alternative to Disentanglement), and emphasize that the goal of Section 4 is only to provide empirical evidence that a distributed operator can learn affine transformations, unlike a disentangled operator which suffers from topological defects. We will emphasize in the discussion that the shift operator that we use can only model cyclic groups, or combinations of cyclic groups by stacking operators.\\n \\n How could groups with unknown structure be learned? For the purpose of our demonstration, which is to show that distributed operators is a solution to the aforementioned topological defects of disentanglement, we hard-code the operator in latent space to be the shift operator. However, this rigidity is not necessary and in fact a family of latent operator can be learned jointly with the encoder and decoder weights. This strategy was used successfully in Connor and Rozell for example (https://arxiv.org/abs/1912.02644), allowing them to learn a complex group with multiple subgroups corresponding to the gait of a stick figure. 
We don't implement this strategy in the present work because we believe that we demonstrate more clearly the advantage of distributed operators by hard-coding the operator to be the distributed shift operator and showing that, unlike the disentangled operator, it successfully learns the affine transformations.\\n \\n=> We will add this discussion point to the paper.\\n\\n\\n_3) Recapitulating our contributions and thanking the reviewers_ \\n \\nIn summary, we would like to recapitulate our contributions in light of the reviewers' comments and of our proposed changes: (1) We show that disentanglement via subspaces introduces topological defects for a broad family of transformations acting on images \\u2014encompassing simple affine transformations such as rotations and translations. (2) These topological defects justify the use of an alternative, more flexible approach to learning transformations via distributed operators allowing the model to be equivariant, and potentially acting on the entire latent space. (3) We theoretically and empirically demonstrate the effectiveness of distributed operators to learn simple affine transformations. Our work provides a theoretical justification for the success of a recent line of empirical work learning complex transformations via distributed operators in latent space.\\n \\nWe thank the reviewers for their deep interest in our work and their sharp comments. We would like to assure them that even after the end of the discussion period, we are committed to improving the paper with any additional suggestions they might have.\"}",
"{\"title\": \"Response to Rev 2 and 4\", \"comment\": \"We thank Rev 2 and 4 for taking the time to thoroughly engage with our work, which helps us clarify our contributions further here and in the paper.\\n\\n_1) The proposed alternative definition of disentanglement is not novel or meaningful_\\n \\nWe understand the concerns of Rev 2 about the novelty and meaningfulness of our definition of disentanglement. We would first like to take a step back and emphasize that the main contribution of our paper is not to propose a new definition of disentanglement. Instead, it is to show that disentanglement introduces topological defects in the encoder for a large family of transformations. We then argue that one can achieve the same *desiderata* as disentanglement ---namely to identify and isolate the transformations present in the data--- by learning to map each of these transformations to a different operator in latent space. As pointed out by the reviewer, this strategy is well known and already used in the literature, and is usually referred to as \\\"learning transformations\\\" (e.g. Connor and Rozell, Dupont et al). However this prior work does not justify theoretically the choice of distributed operators in latent space. The main contribution of our work is to justify *why* distributed operators should be used instead of disentangling operators (=> to avoid topological defects). We believe---as most of the reviewers---that we are the first to provide a theoretical motivation for this choice and that this insight is a valuable contribution on its own. In particular, we note that Reviewer 2 acknowledges the importance of this finding despite the fact that he already knew it himself.\\n\\n=> In order to emphasize our main contribution more clearly, we propose to reorganize the paper as follows:\\n1. Empirical Limitations of Disentanglement (unchanged)\\n2. Topological Defects of Disentanglement (unchanged, current section 3.1 and 3.2)\\n3. 
Learning Transformations with Distributed Operators as an alternative to Disentanglement (current Sections 3.3, 3.4 and 4)\\n- 3.1 Definition of \\u2018Learning Transformations\\u2019 as an alternative to Disentanglement (see definition below)\\n- 3.2 Illustration of this strategy in the simple case of affine transformations: representation theory guarantees the success of the shift operator to learn any affine transformation (unchanged)\\n- 3.3 Illustration of this strategy in the simple case of affine transformations: we verify empirically that we can learn rotations, translations and combinations thereof using the shift operator in latent space, but not the disentangled operator (results unchanged, text reformulated)\\n \\nInstead of redefining disentanglement, we will simply propose in the new Section 3 to learn transformations with distributed operators as an alternative to disentanglement (with the same desiderata in mind of isolating the factors of variation in data). We define below what is meant by learning transformations:\\n\\n\\\"Learning transformations consists in finding an invertible encoder $f$ and a family of operators $\\\\phi_k$ in latent space, such that each operator corresponds to a subgroup acting on image space and the resulting model is equivariant to the group of transformations. Learning transformations can either be achieved by (1) hard-coding the operators and learning the encoder/decoder parameters from examples of transformed images (but this necessitates a priori knowledge of the transformation) or (2) jointly learning the operators and the encoder/decoder parameters (e.g. Connor and Rozell). \\nLearning transformations achieves the same desiderata as disentanglement, namely to isolate factors of variation acting on the data, only via separate operators rather than separate latent dimensions. \\\"\\n\\nWe will explicitly say in the paper that this definition is not new and cite the relevant literature. 
\\n\\n=> We thank again the reviewer for his relevant comments, his involvement in his review and the discussion, and we hope that the proposed reorganization of the paper clarifies our main contribution and addresses the novelty concerns of Rev 2. We would be happy to use this reformulation of our claims upon acceptance, and we are open to additional suggestions of the reviewers to improve the clarity of our contributions further.\"}",
"{\"title\": \"I agree\", \"comment\": \"Since I asked very similar questions in my review (#4), I agree with reviewer 2 that your response (see https://openreview.net/forum?id=cbdp6RLk2r7¬eId=JFX45HKEhFK above) is not convincing. To know that the group is cyclic with a particular cycle order, one needs a lot of prior information about the task. Moreover, what happens if the group of interest is non-cyclic?\\n\\nI don't necessarily consider this problem as a showstopper for the paper (as reviewer 2 might), but a better discussion of its implications would definitely be a plus.\"}",
"{\"title\": \"Not yet convinced\", \"comment\": \"It's nice to see the authors more fully embracing a representation theoretic perspective in the new version of the paper, but I am still not convinced that the definition of disentangling given is novel, meaningful and precise. Firstly, the notion of \\\"controllable operators\\\" is still quite vague. I don't think there is a mathematical criterion for when an operator is \\\"controllable\\\". Rather it seems to have something to do with the operator being known or computable by the researcher, but it would be strange to say the disentangledness of the representation depends on whether we know/can compute the operators acting on it. A representation could be disentangled without us knowing in what way it is / what the operators are.\\n\\nThe claim that the proposed approach/definition enables learning without knowing the group seems to be based on a confusion between the concepts of (abstract) group and group action. It is stated that \\\"Importantly, the shift operator does not require knowledge of the transformation in advance, only the cycle order of each group\\\". But since the method only works for cyclic groups and we have to know the cycle order, we essentially have to know the group. There is only one cyclic group of order k, up to isomorphism. This same group may act in very different ways on the input space, e.g. by rotations or by cyclic translations. So one could say that the method does not require computing explicitly the representation matrices of the group in the input space, although it does require pairs of inputs related by transformations g acting via that representation. But this isn't really new, and it's unclear to me how this is related to the definition of disentangling.\"}",
"{\"title\": \"Response to Reviewer 2 (continued)\", \"comment\": \"_4) Additional references + Novelty concerns_\\n\\nWe thank the reviewer for these interesting references that we were not familiar with. We agree that we are not the first to propose disentanglement via distributed operators, but we believe that we are the first to define distributed disentanglement formally and clearly oppose it to traditional disentanglement via subspaces. We also believe that we are the first to justify theoretically the use of distributed disentanglement by showing the failure mode of traditional disentanglement using arguments from topology, and that we are the first to motivate the use of the shift operator to deal with affine transformations using arguments from character theory.\\n\\n=>We now cite Memisevic & Hinton, Cohen & Welling, Sohl-Dickstein et al in our paper.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"_1) Unclear definition of disentanglement + identity map would trivially satisfy equivariance_\\n\\nWe thank the reviewer for his insightful remarks which allowed us to conceptually clarify our proposed definition of disentanglement. The reviewer is correct to point out that a consequence of our definition of disentanglement is that the network should be equivariant wrt to some representations of the group acting respectively on the input and latent space. However, our proposed definition goes beyond the simple requirement of equivariance (and we now clarify this definition further in the main text). Indeed, an important additional requirement of our definition is that the operators acting on the latent space should be controllable, in the sense that representations should be manipulable at test time to emulate the transformations learned. This desiderata of controllability can be achieved either by choosing these operators in advance (i.e. hard-coded operators), or by learning their explicit form (see Connor and Rozell for an example of learned operators). In section 4, we show how the operators can be chosen in advance in the case of simple affine transformations acting on the input space. \\n\\n=> We reformulated our definition of disentanglement (Def 1) for conceptual clarity. The definition more clearly connects the notion of an equivariant model with controllable distributed operators with the notion of disentanglement. We also included a more precise definition of equivariance in the main text (Section 3.3). We also included additional background on the use of group theory for describing transformations in Section 3.4. \\n\\n\\n_2) The definition mentions that each subgroup should have its own operator, but since all of them act on the whole subspace this seems to be a trivial constraint. 
How is it different than having a representation for the entire group and restrict it to each subgroup?_\\n\\nIndeed, if a representation for the entire group is known in advance, it can very well be restricted to subgroups. However, choosing an operator in latent space capturing the entire group structure requires knowledge of the group structure a priori such that we can correctly identify the representation operator for this group (e.g. we derive the representation operator in Appendix D.4 for the discrete finite Special Euclidean Group). In this case, one could then obtain the representation for each element of each subgroup by plugging in adequate values in the variables of the operator. However, this option requires a priori knowledge of the group and its representation, and thus lacks flexibility. Using different operators for each subgroup, stacked together with intermediate layers, as in our stacked shift operator model, one can at the same time (i) control the representation learned for each subgroup after training and (ii) flexibly learn such operators without deriving the form of the operator for the entire group a priori. Note in this case that it is necessary to include intermediate linear layers, as the different operators should not commute in the case of non-commutative groups. \\n\\n\\n_3) I would further note that what is done in practice in the paper is different from this definition, because we have one latent space per operator, not multiple operators acting on the same space._\\n\\nIn the specific scenario we consider, where we choose the operator in advance to be the shift operator, we cannot have all operators corresponding to all affine transformation simultaneously act on the same latent space (otherwise the operators would all map to the same transformation since they all have identical form). 
This is why we propose an alternative solution consisting in stacking layers, so that each shift operator acts separately on its own latent space, which is learned to map the operator to one of the transformations present in the data. We however want to emphasize that stacking is only one possible implementation of disentanglement consistent with our definition, but in other cases, for example the case where these operators are learned, the operators could all be acting in a common latent space, as in Connor and Rozell for example.\\n\\n=> We added a discussion point about how the operators could be learned in a common latent space, as opposed to be stacked in different layers.\"}",
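To make the stacking point above concrete, here is a minimal numpy sketch (our own illustration, not the paper's code; the order `N`, the random intermediate layer `W`, and the function names are hypothetical). It checks that a single shift operator is a representation of a finite cyclic group, and that a generic intermediate linear layer does not commute with the shift, which is what allows stacked operators to model non-commutative products of subgroups:

```python
import numpy as np

N = 8  # cycle order of each cyclic subgroup (hypothetical)
rng = np.random.default_rng(0)

def shift(v, g):
    """The g-th power of the cyclic shift operator acting on a latent vector."""
    return np.roll(v, g)

v = rng.standard_normal(N)

# A single shift operator represents the cyclic group C_N:
# composing shifts adds group elements modulo N.
assert np.allclose(shift(shift(v, 2), 3), shift(v, 5))
assert np.allclose(shift(v, N), v)  # cyclicity

# An intermediate linear layer W between two stacked shift operators
# (learned in practice) does not commute with the shift, so the stacked
# model is not forced to be commutative.
W = rng.standard_normal((N, N))
assert not np.allclose(W @ shift(v, 1), shift(W @ v, 1))
```

With `W` set to the identity the two shifts would commute, which illustrates why the intermediate layers are necessary in the non-commutative case (e.g. rotations combined with translations).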
"{\"title\": \"Response to Reviewer 4 (continued)\", \"comment\": \"_6) A priori knowledge of the transformations is required_\\n\\nWe believe this is partly a misunderstanding and we would like to clarify the assumptions made when using the shift operator. The shift operator is used to represent the action of the group on the latent space. Importantly:\\n\\n* While the shift operator simply computes a shift of the latent space, we use this form of operator to represent *any* finite cyclic group of affine transformations (e.g. either rotation, translation in x, translation in y). The role of the encoder is to construct a latent space where all these transformations (even rotations) can be represented as shifts. We stack shift operators to represent *any* product of such finite cyclic groups (i.e. rotations and translations combined). \\n* In the affine case that we consider, and in the supervised setting, the shift operator does not require knowledge of the transformation in advance, only the cycle order of each group (e.g. number of discrete rotation angles), which is a requirement we relax in the *weakly supervised* setting.\\n* Our study of the character of affine transformations allows us to guarantee that the shift operator respects the character of the transformation, and allows a linear equivariant model to be learned from pairs of examples for any affine transformation (see our training objectives in Eq. 9 and 10). In addition, note that these types of distributed operators have also been shown to work empirically even in the case where character theory does not directly apply, such as or out-of-plane rotation of 3D objects (see Dupont et al.) or when the affine assumption is not made (see Connor et al.). \\n\\n=> We have moved details and explanations about our shift operator from the appendix to the main text. 
\\n\\n\\n_7) What happens if the latent space implements a group that does not correspond to any symmetry in the data?_\\n\\nIf we understand correctly, the reviewer asks what would happen if the shift operator corresponds to a transformation that is not a symmetry in the dataset. First, note that as we do not \\u201cforce\\u201d the shift operator to represent a specific group, there won\\u2019t be a case where the shift operator is implemented for a specific group (e.g. scaling) but this group is in fact not in the data. We foresee two cases that might represent the aforementioned issue with a learned operator:\\n\\n* One issue that could happen is that the shift operator learns to represent a transformation (e.g. rotations) but not all objects can be rotated without the shape of the objects being modified (e.g. a glass full of water becomes empty when rotated upside down). This issue appears with most equivariant models where context or semantics are not taken into account. We refer the reviewer to this interesting recent work https://arxiv.org/abs/1911.07849 which seems to explore that question, and where the model learns to focus only on relevant transformations. We consider the integration of semantics and context as future work on equivariant models. \\n* The order of the group, decided ahead of time, might not be adapted to the transformations appearing in the data. This would indeed be problematic for the supervised shift operator. However, the weakly supervised shift operator can handle this case, by setting a large enough number of latent transformations (the hyper-parameter K_L in our paper). During learning, the weakly supervised model will use only the needed number of group elements (i.e. the order of the group) among the K_L possible group elements.\"}",
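As a rough illustration of the supervised pair objective referred to above, a numpy sketch might look as follows (the linear encoder/decoder, the dimensions, and the names are placeholder assumptions for illustration; the paper's Eq. 9 and architecture are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 16, 8  # image dimension and latent dimension (= cycle order); illustrative
E = rng.standard_normal((N, D)) * 0.1  # linear encoder (learned in practice)
G = rng.standard_normal((D, N)) * 0.1  # linear decoder (learned in practice)

def shift_loss(x, x_g, g):
    """Encode x, apply the hard-coded shift operator for group element g,
    decode, and compare with the transformed image x_g (squared error)."""
    z = E @ x
    z_shifted = np.roll(z, g)  # distributed latent operator
    return np.mean((G @ z_shifted - x_g) ** 2)

x = rng.standard_normal(D)
# With g = 0 the latent operator is the identity, so the objective reduces
# to a plain autoencoder reconstruction loss.
loss = shift_loss(x, x, 0)
assert loss >= 0.0
```

Training on pairs (x, x_g) for all group elements g then forces the encoder to build a latent space in which the image-space transformation acts as a cyclic shift.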
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"_1) Reorganization of the paper_\", \"we_thank_the_reviewer_for_his_important_suggestion_of_reorganization\": \"the solution to the problem of disentanglement was only presented in detail in Appendix C.3.1, although it is one of the main contribution of the paper.\\n\\n=> In the updated version of the paper, we now use the extra page allotted to us to introduce most of the material from Appendix C.3.1 in the main text.\\n\\n\\n\\n_2) References to prior work using distributed operators_\", \"the_references_to_this_prior_work_were_not_missing_from_the_paper_but_were_introduced_in_the_last_paragraph_of_the_discussion\": \"\\\"Finally, our work lays a theoretical foundation for the recent success of a new family of methods that \\u2014instead of enforcing disentangled representations to be restricted to distinct subspaces\\u2014 use operators (hard-coded or learned) acting on the entire latent space (Connor & Rozell, 2020; Connor et al., 2020; Dupont et al., 2020; Giannone et al., 2020; Quessard et al., 2020).\\\"\\n\\n=> To improve the visibility of these references, We add \\\"(see discussion)\\\" in the abstract and intro when we allude to this prior work.\\n\\n\\n_3) The shift operator handles discrete groups only_\\n\\nIndeed, the shift operator we propose only handles cyclic groups with finite order (or a product of such groups). If we were to use the shift operator to model continuous transformations, a discretisation step would be needed indeed. The level of discretisation could be learned as a hyper parameter, which would increase the number of parameters to tune. This method could work if only a subset of all possible continuous transformations appears during training (which is expected using a finite training dataset), but may struggle to generalise to new values of the transformation at test-time. 
It would be very interesting to investigate this in future work. The reviewer might also be interested in these references: Falorsi et al. propose an extension of the VAE with the reparametrisation trick on the Lie algebra of SO(3), and Connor et al. use the exponential map to model continuous transformations in latent space.\\n\\n=> We thank the reviewer for pointing this out and we now acknowledge this limitation clearly in the main text. \\n\\n\\n_4) Results are better for rotations than translations_\\n\\nWe confirm that the MSE obtained is lower for rotations than for translations. We believe this is due to the fact that there is more overlap between successively translated shapes than between successively rotated shapes, making learning more ambiguous and thus more difficult in the case of translations. This is also the intuition we give in the Appendix B.3 paragraph \\u201cEffect of the number of latent transformations\\u201d to explain why, in the case of translations, the weakly supervised model provides best results using a larger number of latent transformations than the ground-truth order of the group. \\n\\n=> We have emphasized this point in the main text. \\n\\n\\n_5) Show reconstructions and ground-truths_\\n\\n=> We have added appendix Figures 17 and 18 showing pairs of samples and reconstructions by the stacked shift operator model in the cases of (i) translation in both x and y axes and (ii) rotations and translations in both axes. We refer to these Figures in the main text.\"}",
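To illustrate the discretisation step discussed in point 3 above, a continuous transformation parameter can be binned into an element of a finite cyclic group before applying the shift operator (a sketch under our own assumptions; the binning scheme and the constant `K` are illustrative, not part of the paper):

```python
import numpy as np

K = 12  # discretisation level, i.e. order of the cyclic group (a hyper-parameter)

def angle_to_group_element(theta):
    """Bin a continuous rotation angle into one of K cyclic group elements."""
    return int(np.round(theta / (2 * np.pi / K))) % K

def apply_latent_shift(z, theta):
    """Apply the shift operator corresponding to the binned angle."""
    return np.roll(z, angle_to_group_element(theta))

z = np.arange(K, dtype=float)
# A full turn maps back to the identity element, respecting the cyclic topology.
assert angle_to_group_element(2 * np.pi) == 0
assert np.allclose(apply_latent_shift(z, 2 * np.pi), z)
```

As noted above, such a scheme can only represent the transformation values seen on the grid, which is why generalisation to unseen continuous values remains an open question.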
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"_1) Reliance on the example of the perturbed triangle_\\n\\nOur proof of the topological defects of disentanglement does not rely on this specific example, which is presented only to show the reader an example of topological defect in the specific case of rotation. In the case of rotation, any object that presents a symmetry wrt to rotation will introduce topological defects in the encoder, not just a triangle. Moreover, our proof of theorem 1 does not rely on symmetry arguments at all, but on arguments of topological isomorphisms which are much more general. Our empirical findings also confirm that disentanglement is hard in practice. Please see Appendix C.1 for the full proof of the theorem, that does not rely on the example of the perturbed triangle, nor on any symmetry argument.\\n\\n\\n_2) Using distributed operators in latent space is not novel + comparison of our method with prior work_\\n\\nWe do not claim that the approach to disentanglement by the means of distributed operators is novel. Our work consists in theoretically motivating *why* this approach is more suitable than traditional disentanglement via subspaces, using arguments from topology and representation theory. 
Our claims in the abstract and discussion are clear: \\n\\\" Our work lays a theoretical foundation for the recent success of a new generation of models using distributed operators for disentanglement.\\\"\\n\\\" our work lays a theoretical foundation for the recent success of a new family of methods that \\u2014instead of enforcing disentangled representations to be restricted to distinct subspaces\\u2014 use operators (hard-coded or learned) acting on the entire latent space (Connor & Rozell, 2020; Connor et al., 2020; Dupont et al., 2020; Giannone et al., 2020; Quessard et al., 2020).\\\"\\nWe do not see value in comparing our results to this prior work, as we believe that these models, similar to the one we introduce in Section 4, will also be able to learn the affine transformations we learn in this paper. \\n\\n\\n_3) Comparison to VAE is not appropriate_\\n\\nWe do not compare the performance of our model to VAE models. We do study the failure mode of VAEs and their variants because they are a standard approach to disentanglement in the field. We agree with the reviewer that comparing VAEs with our approach would be unfair because our approach requires supervision via pairs of transformed examples and a choice of operators in latent space. However, the comparison in Section 2 and Table 2 is with supervised auto-encoders using operators restricted to a subspace. This comparison is fair because it is exactly the same supervised setting that we use in Section 4, where we show the merits of using distributed operators in latent space.\\n\\n\\n_4) If I understand correctly, the paper seems to be based on a mischaracterisation of the arguments in [3]_\\n\\nIt is not at all clear to us how our work would be mischaracterizing Higgins et al.'s arguments. To clarify, our work builds on the definition of disentanglement of Higgins et al. but we extend their work in several ways. First, we show that traditional disentanglement introduces topological defects (i.e. 
discontinuities in the encoder), even in the case of simple affine transformations. Second, we conceptually reframe disentanglement, allowing equivariant operators to act on the entire latent space, so as to resolve these topological defects. Finally, we show that models equipped with such operators successfully learn to disentangle simple affine transformations.\\n\\n\\n_5) PCA analysis on our approach would present a flat eigenspectrum_\\n\\nIndeed, a PCA analysis of the latent space under our distributed approach to disentanglement would present a flat eigenspectrum. But in our approach, contrary to the VAE case where the PCA analysis is applied, we learn to map the transformation present in the data to a known operator acting on the latent space, and so we know how to emulate the transformation learned in latent space, despite the fact that it is distributed. This is not the case for VAEs, where there is no known operator in latent space that is equivariant to the transformation learned. The PCA analysis is thus relevant in the VAE approach, but not in our approach to disentanglement via distributed operators.\\n\\n_6) ReLUs are not differentiable_\\n\\nThe reviewer is correct to point out that ReLU networks are not differentiable everywhere, and therefore, as we note in the main text, our proof about the impossibility of obtaining an invariant subspace to a transformation on Euclidean space does not hold for ReLU networks. However, our more general Theorem 1 proven in Appendix C.1 does not rely on differentiability but on the continuity of the encoder and thus holds even in the case of ReLU networks.\\n\\n\\n_7) Additional Feedback_\\n\\nWe thank the reviewer for the additional feedback that we will use to clarify some aspects of the paper, and for finding a typo in Appendix A.\"}",
"{\"title\": \"Response to Reviewer 1 (continued)\", \"comment\": \"_5) How different transformations impact each other_\\n\\nRegarding how different transformations impact each other, we provide experimental results in Figure 3E that shows exemplar results of our model for a discrete version of the Special Euclidean Group on rotated-translated shapes. Figure 3D shows the case of translation in both x and y axes at the same time. Additionally, appendix Figure 14b shows results of this model on rotated-translated MNIST and Figure 16 shows results on rotated-translated shapes when the semi-direct structure of the group product is not respected. In Table 2, we also report MSE for the stacked shift operator on both datasets. \\n\\n\\nIn addition to these results showing how we can successfully learn a combination of transformations, the paragraph \\u201cInsight from representation theory on the structure of hidden layers\\u201d in the main text describes the theoretical challenges of dealing with rotations and translations happening together. We note that since rotations and translations do not commute, the correct operator cannot be diagonal, otherwise two operators corresponding to two elements would commute. Indeed, as we show in Appendix D.4 the resulting operator for the discrete finite Special Euclidean case has a block matrix form based on representations of both translations and rotations. We do not directly use this form in our model, but instead use the stacked version of the shift operator, with intermediate linear layers in-between diagonal shift operators. However, Appendix D.4 gives us insight about the form that intermediate layers should take after training: we prove that the intermediate layers of our stacked model should have a block-diagonal form. \\n\\n=> We have changed the main text to emphasise these results. 
We have also added appendix Figures 17 and 18 showing pairs of samples and reconstructions by the stacked shift operator model in the cases of (i) translation in both x and y axes and (ii) rotations and translations in both axes. \\n\\n\\n_6) Complex and real versions of the shift operator_\\n\\nWe also use the real version of the shift operator in Figure 3A, as described in the main text. In all our experiments, we see no difference in the results between the real and complex operators, as predicted by the theory.\\n\\n_7) Dense latent traversals_\\n\\nWe apply the same latent traversal procedure used in popular disentanglement methods (Beta-VAE and CCI-VAE). We further extend the traversal range from [-3, 3] to [-6, 6] to ensure that we capture the full space of possible variation. \\n=> We also now include 3 additional figures in Appendix E.2, using a more dense latent traversal with 50 plots per latent dimension.\\n\\n\\n_8) Rotations are challenging to learn_\\n\\nWe do not entirely rule out the possibility that with a bigger network and many more samples, we could learn rotation with the traditional disentanglement approaches, as acknowledged in the main text. What we do show, however, is that the function learned must be highly discontinuous with arguments from topology. We also propose a new approach to disentanglement which is successful at learning the rotation transformations with as few as 2000 samples, by respecting the topology of the transformation to disentangle.\"}",
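Regarding the equivalence of the real and complex shift operators mentioned in point 6 above, one way to see it (a standard character-theory fact, sketched here in numpy; this is our own illustration, not the paper's code) is that the real cyclic shift matrix is diagonalised by the discrete Fourier transform, with eigenvalues the N-th roots of unity, i.e. exactly the characters of the cyclic group:

```python
import numpy as np

N = 8
# Permutation matrix of the cyclic shift by one step: (S v)[i] = v[(i-1) mod N].
S = np.roll(np.eye(N), 1, axis=0)

# The unitary DFT matrix diagonalises S: F S F^H = diag(exp(-2i*pi*k/N)),
# so the real shift and the complex diagonal operator are equivalent
# representations of the cyclic group C_N.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
D = F @ S @ F.conj().T
assert np.allclose(D, np.diag(np.diag(D)), atol=1e-10)   # D is diagonal
assert np.allclose(np.diag(D), np.exp(-2j * np.pi * np.arange(N) / N))
```

This change of basis is why, as stated above, the real and complex versions of the operator are expected to yield identical results in theory.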
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"_1) Lack of quantitative metrics of disentanglement_\\n\\nWe agree with the reviewer that it is interesting to quantify disentanglement beyond the MSE of the transformed reconstructions. However, traditional metrics of disentanglement (such as mutual information gap) cannot be applied with distributed operators in latent space. Indeed, traditional disentanglement metrics are not appropriate as they describe how well factors of variation are restricted to subspaces---in contrast to our proposed framework using distributed latent operators. To further quantify the evaluation of the shift operator, we thus compute LSBD, a measure of disentanglement appropriate for distributed operators proposed by the recent ICLR submission https://openreview.net/forum?id=YZ-NHPj6c6O. LSBD measures how well latent operators capture each factor variation, allowing us to quantify disentanglement in the setting of distributed operators. \\n\\n=> Using this new metric, we further quantify the advantage of the shift operator with LSBD of 0.0020 versus the disentangled operator with LSBD of 0.0106 for the models in Figure 3.A and 3.B. We now include these results in Section 4.1 and Appendix E.1 of the manuscript. The LSBD measure confirms our existing qualitative results (Figure 3 and Appendix E.2) and quantitative MSE measures in Appendix E.1.\\n\\n\\n_2) Comparison with FactorVAE_\", \"we_already_implemented_three_models_commonly_used_for_disentanglement\": \"VAE, Beta-VAE, and CCI VAE. However, we agree with the reviewer that it would also be interesting to implement FactorVAE as an additional baseline model.\\n\\n=> We are in the process of implementing FactorVAE as an additional baseline model. We commit to add the FactorVAE baseline to the final version of the paper.\\n\\n\\n_3) Hyper-parameter optimization for baseline models_\\n\\nWe agree that optimizing hyper-parameters is important for baseline models. 
Our hyper-parameter optimization sweeps over both sets of model hyper-parameters used to successfully disentangle factors of variation in the CCI-VAE paper by Burgess et al. Additionally, we sweep over all combinations of model hyper-parameters (beta, latent dimension) and training parameters (learning rates, batch sizes, and random seeds), amounting to a sweep of >1000 models per baseline (see Appendix B.4). We select the model minimizing reconstruction MSE on a held-out validation set separate from the test and training sets.\\n\\n_4) A priori knowledge of the transformations is required_\\n\\nWe believe this is partly a misunderstanding and we would like to clarify the assumptions made when using the shift operator. The shift operator is used to represent the action of the group on the latent space. Importantly:\\n\\n* While the shift operator simply computes a shift of the latent space, we use this form of operator to represent *any* finite cyclic group of affine transformations (e.g. either rotation, translation in x, translation in y). The role of the encoder is to construct a latent space where all these transformations (even rotations) can be represented as shifts. We stack shift operators to represent *any* product of such finite cyclic groups (i.e. rotations and translations combined). \\n* In the affine case that we consider, and in the supervised setting, the shift operator does not require knowledge of the transformation in advance, only the cycle order of each group (e.g. number of discrete rotation angles), which is a requirement we relax in the *weakly supervised* setting.\\n* Our study of the character of affine transformations allows us to guarantee that the shift operator respects the character of the transformation, and allows a linear equivariant model to be learned from pairs of examples for any affine transformation (see our training objectives in Eq. 9 and 10). 
In addition, note that these types of distributed operators have also been shown to work empirically even in the case where character theory does not directly apply, such as out-of-plane rotation of 3D objects (see Dupont et al.) or when the affine assumption is not made (see Connor et al.). \\n\\n=> We have moved details and explanations about our shift operator from the appendix to the main text.\"}",
"{\"title\": \"Response to Reviewer 5\", \"comment\": \"_1) Definition 1 is unclear_\\n\\n=> We updated our definition of disentanglement in the main text to reflect the comments of the reviewers:\\n\\n\\u201cDefinition 1. A representation is disentangled with respect to a set of transformations, if there is a family of controllable operators, potentially acting on the entire representation, where each operator corresponds to the action of a single transformation and the resulting model is equivariant. \\n\\nThese operators are controllable in the sense that they have an explicit form, thus allowing the user to manipulate the latent representation by applying the operator. This definition, more flexible than traditional disentanglement in the choice of the latent operators, obeys to the same desiderata of identification and isolation of the factors of variations present in the data.\\u201c\\n\\n\\n_2) Reorganisation of the paper by adding background into main_\\n\\nWe thank the reviewer for pointing out the opportunity for additional background. We made several changes to ensure both the idea of equivariance and the use of group theory is clearly introduced for a general ML audience.\\n\\n=>We reformulated Definition 1 without reliance on group theory, added a paragraph introducing equivariance (formally and informally in Section 3.3), and included additional background on the use of group theory for describing transformations in Section 3.4.\\n\\n\\n_3) A priori knowledge of the transformations is required_\\n\\nWe believe there is partly a misunderstanding and we would like to clarify the assumptions made when using the shift operator. The shift operator is used to represent the action of the group on the latent space. Importantly:\\n\\n* While the shift operator simply computes a shift of the latent space, we use this form of operator to represent *any* finite cyclic group of affine transformations (e.g. either rotation, translation in x, translation in y). 
*The role of the encoder is to construct a latent space where all these transformations (even rotations) can be represented as shifts.* We stack shift operators to represent *any* product of such finite cyclic groups (i.e. rotations and translations combined). \\n* In the affine case that we consider, and in the supervised setting, the shift operator does not require knowledge of the transformation in advance, only the cycle order of each group (e.g. number of discrete rotation angles), which is a requirement we relax in the *weakly supervised* setting.\\n* Our study of the character of affine transformations allows us to guarantee that the shift operator respects the character of the transformation, and allows a linear equivariant model to be learned from pairs of examples for any affine transformation (see our training objectives in Eq. 9 and 10). In addition, note that these types of distributed operators have also been shown to work empirically even in the case where character theory does not directly apply, such as out-of-plane rotation of 3D objects (see Dupont et al.) or when the affine assumption is not made (see Connor et al.). \\n\\n=> We have moved details and explanations about our shift operator from the appendix to the main text.\"}",
"{\"title\": \"General response to the Reviewers\", \"comment\": \"We thank the reviewers for their time and thoughtful reviews. Their insightful comments helped improve the quality of the paper. All reviewers recognize the importance of the topic we address. Furthermore, reviewers noted the merits of our work by acknowledging the relevance of the topological flaws of disentanglement we uncovered (R1, R4, R2 and R5), the pertinence of our proposed relaxed definition of disentanglement (R1, R4 and R5), and the effectiveness of our shift operator solution for the disentanglement of affine transformations (R1, R2, R4, and R5).\\n\\nThe reviewers had common clarification questions and suggestions to improve the paper. Based on the reviewers\\u2019 comments, we incorporated many suggestions and clarified our contribution in several ways.\\n\\nFirst, we clarified the assumptions of our proposed shift operator. Most important, we would like to clarify here (and have in the text) that our proposed shift operator *does not* require knowledge of the transformation to be learned in advance (R1, R4 and R5). In the case we consider (affine discrete and cyclic transformations), the only knowledge needed in the supervised setting is the cycle order of each group (e.g. number of discrete rotation angles), which is a requirement we relax in the weakly supervised setting. To clarify this point, we have moved details and explanations about our shift operator from the appendix to the main text, and added some clarifications to the main text.\\n\\nSecond, reviewers had excellent suggestions concerning the organization of the paper (R4 and R5), which we have also taken into account and addressed in the resubmission. Third, we improved the clarity of our definition of disentanglement (R2 and R5). \\n\\nFinally, we further describe the connections and differences between our work and existing methods (R1, R2, R3). 
\\n\\nWe replied to each reviewer\\u2019s questions and comments individually. We uploaded a revised version of our paper, and in our replies to each reviewer we indicate our changes to the paper with the \\u201c=>\\u201d symbol. We are eager to discuss any further concern that the reviewers might have and thank them in advance for their time and consideration.\"}",
"{\"title\": \"Interesting conceptual formulation, not practically developed\", \"review\": \"This paper presents the idea that the current formulation of disentangled latent representations of data that have been presented are implausible in the sense that the factors are often not actually independent and cannot be learned or generated as independent. Instead the authors put forth the idea of transformations of data that are equivariant to the latent space representation as a formulation of disentangled factors. The authors use group theoretical constructs such as shift and rotation operators to show that a latent space representation should be equivariant such transformations. In other words, if a latent space representation is rotated, it should still reconstruct correctly, because the reconstruction loss should be trained on a rotated version of the image.\\n\\nThe key strengths of this paper are the examples that showcase the lack of ability to learn independent latent factors. Figure 1 displays the failure to learn rotation as a factor in the MNIST digit dataset. Figure 2 is even more convincing in that it shows that the orbits of the different factors cannot be mapped to one another and thus cannot be truly independent. \\n\\nSecond, I believe that the idea that is better stated in the introduction on how disentanglement can be framed is valuable: \\u201c In this framework, the factors of variation are different subgroups acting on the dataset, and the goal is to learn representations where separated (of the data) subspaces are equivariant to distinct subgroups.\\u201d Theoretically the authors are proposing an operational view of the latent factors as separate transformations on the data, and the representation as having subspaces equivariant to the transformations. Definition 1 is trying to state the same idea but is much less clear to the average ML reader\\n\\nMore generally, the authors should work harder to communicate this to the ML audience. 
The group theoretical background from the appendix should be in the background section, particularly the idea of equivariance and group operations. \\n\\nThe key weakness of their new formulation of disentanglement is that it is definitional and does not give a plan of how this should be done. Based on their description it seems as if the dataset has to come with a set of known operations on the data (like rotations) that are equivariant. How would such operations be learned de novo from the data? It seems as if the framework requires learning two things separately: 1. a latent representation of the data; 2. a set of equivariant operations on the data (that are perhaps cyclic generators of an orbit). It is not clear how this would be learned.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review of \\\"Addressing the Topological Defects of Disentanglement\\\"\", \"review\": \"Summary: The authors proposed a new way to disentangle affine transformations without topological defects. This paper made several theoretical contributions including a new definition of disentanglement and demonstration of the topological defects in existing disentanglement methods. Experimentally, this paper showed how their proposed shift operator model is powerful when dealing with topological defects.\\n\\nDisentanglement is a relatively challenging task due to the lack of clear definition and the lack of a robust evaluation method. The authors did a good job providing new theoretical definitions and providing empirical and qualitative results to support their claims. The main weakness of the paper is the lack of quantitative metrics to evaluate their approach and compare with others. In addition, the model doesn\\u2019t appear to be very flexible as it requires that the transformation is known in advance.\", \"strengths\": [\"Overall, the paper is well written and contains a good review of advances in the theory of disentanglement.\", \"The idea of addressing topological defects for disentanglement appears novel.\", \"Using operators on the entire latent space is a new direction for the study of disentanglement. The authors\\u2019 viewpoint that \\u201cisolating factors of variation\\u201d is different from \\u201cmapping these factors into distinct subspaces\\u201d, and how they propose a new definition based on this viewpoint is interesting.\"], \"weaknesses\": [\"Lack of quantitative evaluation metrics. 
The MSE in the appendix is not enough for quantifying disentanglement.\", \"Since this paper focuses on disentanglement, at least Factor-VAE, one of the other representative disentanglement VAE models should be considered when doing the model evaluation.\", \"Baseline models should be optimized in a more comprehensive manner (e.g., currently the selection of beta is {4, 10, 100, 1000} and latent dimension is {10, 30}). It\\u2019s unclear whether these models have been well optimized, or what measures are used to optimize the models for this task.\", \"Because the method requires that the transformation is known in advance, this limits the flexibility of the approach.\", \"How different transformations impact each other is not shown experimentally - there is only an example on Fig 3E showing some visual results, but this should be elaborated on further given the goal of the paper.\"], \"minor_points\": [\"The complex version of the shift operator is used. It would be interesting to show another version and their differences.\", \"Latent traversals results appear to be rather sparse. It would be interesting to show how the variation exists inside the model via dense traversals and the computing of generated images variation with different latent traversals.\", \"Rotations may be more challenging to learn. 2000 examples may be insufficient for the model to learn this transformation correctly.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, improper contex\", \"review\": \"**Summarize what the paper claims to contribute.**\\nThe authors claims to show that disentanglement into subspaces by a continuous encoder is impossible for any finite group acting on Euclidean space\\nThe authors claim to introduce an alternative definition of disentanglement that is more flexible and leads to a \\n\\n**Strengths:**\\nThe authors consider the problem of disentangled representation learning which is of considerable interest to the community\\nThe authors approach the problem by imposing structure through their disentangled operators\\n\\n**Weaknesses:**\\nThe reliance of the \\u201cimpossibility of disentanglement\\u201d proof seems to rely heavily on the example of the perturbed triangle. The example and its assumptions seem fairly rigid and unnatural and I am unconvinced this captures the reality of disentangled representation learning with auto-encoding networks.\\nThe approach of adding structure by means of a transformation operator was also used in [1,2] which are cited but not compared against. Instead the authors compare against various VAEs which do not impose any external structure which does not seem particularly appropriate.\\nIf I understand correctly, the paper seems to be based on a mischaracterization of the arguments in [3]\\n\\n**Clearly state your recommendation (accept or reject) with one or two key reasons for this choice.**\\nReject. See weaknesses\\n\\n**Supporting arguments for your recommendation.**\\nWhile the authors tackle an interesting problem and propose an interesting solution, the arguments on which the paper is based seem flawed.\\n\\n**Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment.**\\nAs I understand, the argument is against the utility of the *linear* disentangled representation in [3]. 
The more flexible definition the authors propose seems quite close to *disentangled representation* in [3], please clarify the difference.\\nMoreover, it seems the authors suggest the definition of disentangled representations proposed in [3] requires that subspaces corresponding to factors of variation are single dimensional (section 2) which is not the case, please clarify.\\nHow does the approach compare against other methods, namely [1,2,4] that use structure to encourage disentangling of the representation?\\nSection 2 asserts that the VAE and its variants do not learn disentangled representations and uses PCA to show this is true. I expect that if this same analysis were used in the structured case, a similar result would be found, in particular, since the rotation matrix interacts with multiple dimensions of the latent code. Perhaps my intuition is incorrect, please clarify.\\n\\n**Provide additional feedback with the aim to improve the paper.**\\nPerhaps a rewording could clarify: (Supervised Disentanglement) is composed of a 2x2 diagonal block... \\u2192 is a block diagonal matrix with a 2x2 rotation matrix in the upper left block and 1s on the remaining diagonals\\n(just after 11) The authors state that most deep networks are differentiable, my understanding is that the common ReLU networks are not differentiable but subdifferentiable\\n\\n**Possible typos:**\\n(VAE, beta-VAE and CCI-VAE) the \\u201d4s\\u201d \\u2192 the ``4s\\u201d\\n(Dfn of a group; identity element) g_k e_G = e_G g_k = e_G \\u2192 g_k e_G = e_G g_k = g_k\\n\\n**Post rebuttal**\\nI thank the authors and other reviewers for their comments and discussion. While the direction the authors pursue is of unquestionable merit, I remain unconvinced that the work as it stands is sufficiently impactful for this venue. \\n\\n[1] Falorsi, Luca, et al. 
\\\"Explorations in homeomorphic variational auto-encoding.\\\" arXiv preprint arXiv:1807.04689 (2018).\\n[2] Connor, Marissa, and Christopher Rozell. \\\"Representing Closed Transformation Paths in Encoded Network Latent Space.\\\" AAAI. 2020.\\n[3] Higgins, Irina, et al. \\\"Towards a definition of disentangled representations.\\\" arXiv preprint arXiv:1812.02230 (2018).\\n[4] Cohen, Taco, and Max Welling. \\\"Learning the irreducible representations of commutative lie groups.\\\" International Conference on Machine Learning. 2014.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A very interesting paper that needs to be restructured before publication.\", \"review\": \"The paper first shows that existing approaches to latent space disentanglement perform poorly when the latent space topology (usually Euclidean) does not match the actual data topology, using rotation equivariance as an example. This analysis culminates in a general impossibility theorem for this type of disentanglement. The authors then propose a relaxed definition of disentanglement and show that it can be realized by means of a shift operator in latent space. Theoretical and empirical results demonstrate the superiority of the new approach. This is a very interesting idea that represents significant progress in an important problem.\\n\\nUnfortunately, the current organization of the paper does not work well: the authors devote too much space (half of the paper!) to the explanation of the problem, and too little (barely one page) to its solution. This leaves the reader with many unanswered questions about how the new method works and what its crucial details are. Some of these questions are later dealt with in the appendix, but this is too late.\\n\\nMy main suggestion for improvement is therefore to move most of section C 3.1 to the main text and allocate the required space by shortening the motivation (up to section 3.2) and possibly the discussion of multiple transformations in section 4.3. Content that would get lost by this change should be moved to the appendix.\", \"more_minor_points_are\": [\"The authors repeatedly refer to \\\"recent success of ... distributed operators\\\", but do not cite and discuss any prior work. Please add appropriate references to the introduction or related work.\", \"Rotations and translations are continuous transformations, whereas the proposed shift operator is discrete. Does this discretization introduce rounding errors or other artifacts? How many discretization levels are needed, and how can this number be determined? 
Does discretization have undesirable limitations? Such potential limitations should at least be acknowledged. Ideally, these questions should be investigated experimentally (but this can be left for future work if infeasible in the present paper).\", \"In appendix E.1, results for rotations are an order of magnitude better than those for translations. Why is this the case?\", \"Figure 3E: It is hard to judge if the results align with the ground truth. Preferably, the ground truth should be displayed for reference.\", \"Is it necessary to design the network according to a priori knowledge of the relevant group transformations, or can this be inferred automatically? For example, what happens if the latent space implements a group that does not correspond to any symmetry in the data?\", \"I'm willing to raise my rating if these points (in particular, the relocation of section C 3.1) are suitably addressed in an updated version of the submission.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper studies the notion of disentanglement in a group representation theoretic setting. Disentangling is sometimes conceptualized as mapping distinct factors (e.g. position / orientation) to distinct subspaces. It is shown theoretically that such a naive notion of disentangling is impossible for topological reasons, and this is confirmed empirically. An alternative definition of disentanglement is given, where instead of confining the effect of each transformation to a subspace, an operator is used that acts on the whole latent space (this operator is chosen as a shift operator, which works for cyclic groups). It is shown empirically that an autoencoder with a shift operator in latent space is better able to learn rotations and translations.\\n\\nThe paper does a good job explaining why the naive notion of disentangling leads to topological problems, and convincingly backs this up with experiments as well. The insight is not new to me personally, but I can't find a reference that explains it and I think it is not widely understood, so I consider this an important contribution to the (very muddled) discourse on disentangling.\\n\\nDefinition 1 provides a new definition of disentangling. However, the statement is not very precise, and I am not convinced that it can reasonably be considered as a definition of disentanglement. The definition is: \\n\\n\\\"A representation is said to be disentangled with respect to a particular decomposition of a symmetry group into subgroups, if there is a family of known operators acting on this representation, potentially distributed across the full latent, where each operator is equivariant to the action of a single subgroup.\\\"\\n\\nBased on the rest of the paper, I think this means that we have for each subgroup G_i an operator phi_i(g) acting on the latent space. 
The definition does not make it clear that we wish the encoder to be equivariant wrt this operator and some operator acting on the input space, but I will assume that is what is meant (otherwise, having an operator acting on the latent space is a rather vacuous requirement on the encoder/representation). The definition does speak of the operator being equivariant, which I will take to mean that it is a group representation, i.e. phi(gg') = phi(g)phi(g'). The operator being distributed I will take to mean that phi(g) can be any linear map, not necessarily acting trivially on a subspace or being (block-) diagonal / reduced. \\n\\nThe definition mentions that each subgroup should have its own operator, but since all of them act on the whole subspace this seems to be a trivial constraint. Indeed if we have a representation of the whole group acting on the latent space, simply restricting it to each subgroup gives us a representation of the subgroups. I would further note that what is done in practice in the paper is different from this definition, because we have one latent space per operator, not multiple operators acting on the same space.\\n\\nUnder this interpretation, I don't see how the definition is saying anything else than that the network should be equivariant wrt some representation of the group acting on the input and output space. Although equivariance is a good property for various reasons, it does not seem to me to be a reasonable definition of disentangling by itself. 
Indeed, the identity map satisfies this constraint trivially.\\n\\nIt may be that I have misunderstood definition 1, but this strengthens the case for making it mathematically precise.\\n\\nEven if one can question whether Def 1 is a good formalization of disentangling, the paper does show empirically that it is easier to learn an equivariant encoder/decoder when the latent operator is a shift operator or a diagonalized complex version of it, rather than a disentangled operator (with one 2x2 rotation matrix block and an identity block; fig 3b). Although I don't know if these two approaches have been compared before, several older papers consider similar models to the shift operator model. \\n\\nFor instance, in a sequence of papers Memisevic & Hinton considered factorized RBMs that do something similar. Cohen & Welling described a representation-theoretic version of this model which is very similar to what is presented in this paper (at least the linear AE), and also gave a definition of disentangling (under this definition, the complex diagonal shift operator is disentangled while the original shift operator is not). Models with a stack of multiple operators were considered by Sohl-Dickstein et al.\\n\\nIf one wishes to define a notion of disentangling based on subgroups and representations, it may be worth investigating subgroup adapted / Gelfand-Tsetlin bases.\\n\\nIn summary, I think this paper contains several interesting observations and results, and I think the general direction is very interesting and deserves further study. However, I'm not convinced that this paper provides a good definition of disentangling, the experiments, although convincing and well executed, are restricted to simplified domains, and some of the insights / methods presented in the paper are already present in earlier work. 
Nevertheless I hope the authors will not be discouraged, and continue to work on this important and fundamental problem using the tools of representation theory.\\n\\nReferences\\nMemisevic & Hinton, Learning to Represent Spatial Transformations with Factored Higher-Order Boltzmann Machines, 2010\\nSohl-Dickstein, Wang, Olshausen, An unsupervised algorithm for learning Lie group transformations, 2010\\nCohen & Welling, Learning the Irreducible Representations of Commutative Lie Groups, 2014\\nWakin, Donoho, Choi, Baraniuk, The multiscale structure of non-differentiable image manifolds, 2005\\n\\n----\", \"post_discussion_update\": \"Having read the other reviews, author response and updated paper, I still think this paper is borderline. The insight that disentangling transformations as naively defined is impossible for topological reasons is valid and interesting, but seems to have been already observed by others, e.g. Falorsi et al. Nevertheless the paper does a good job explaining this so it could be useful, as some authors seem to not know about this issue. The definition of disentangling still seems a bit vague to me, and I'm not convinced of practical applicability of the proposed method.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
EVV259WQuFG | Machine Reading Comprehension with Enhanced Linguistic Verifiers | [
"Xianchao Wu"
] | We propose two linguistic verifiers for span-extraction style machine reading comprehension to respectively tackle two challenges: how to evaluate the syntactic completeness of predicted answers and how to utilize the rich context of long documents. Our first verifier rewrites a question by replacing its interrogatives with the predicted answer phrases and then builds a cross-attention scorer between the rewritten question and the segment, so that the answer candidates are scored in a \emph{position-sensitive} context. Our second verifier builds a hierarchical attention network to represent segments in a passage where neighbour segments in long passages are \emph{recurrently connected} and can contribute to the current segment-question pair's inference for answerability classification and boundary determination. We then combine these two verifiers into a pipeline and apply it to the SQuAD2.0, NewsQA and TriviaQA benchmark sets. Our pipeline achieves significant improvements in both exact match and F1 scores over state-of-the-art baselines. | [
"machine reading comprehension",
"BERT",
"linguistic verifiers",
"hierarchical attention networks"
] | Reject | https://openreview.net/pdf?id=EVV259WQuFG | https://openreview.net/forum?id=EVV259WQuFG | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"ZxnDr98V6TT",
"nJeLT-aK9mF",
"jGPnPss_p6z",
"gpu8qbM_jZk",
"p1RyhcvKbpY",
"Jb4GJKX84a2",
"1qjgU3df10",
"aKQHVTKpOcR",
"jMeAGbTm_D3"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040353789,
1605634168845,
1605628296864,
1605608818633,
1605602173603,
1604047180985,
1604034782551,
1603933540945,
1603714561390
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3420/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3420/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3420/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3420/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3420/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3420/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3420/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3420/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The authors propose two linguistic verifiers for improving extractive question answering when the question is answerable. The first replaces the interrogative in the question with candidate answers and evaluates the result both in isolation and in combination with the answer-containing sentence to do answer verification. The second jointly encodes individual sentences and spans with questions in a hierarchical manner to improve use of context in answer prediction performance.\", \"the_reviews_for_this_paper_are_roughly_on_the_cusp\": [\"2 reviewers rate the paper a bit below the acceptance threshold, 1 a bit above, and then 1 now rates the paper as a solid Accept.\", \"Pros\", \"The main strength of the paper, certainly as emphasized by the most positive reviewer is the strong empirical results. Especially on SQuAD v2, the method here seems to roughly equal the current leading system on the leaderboard.\", \"The paper also proposes two methods for improving question answering that make sense, are relatively simple, and work\", \"Cons\", \"The writing and presentation of the paper is not that great. Even at the level of the introduction, the writing just is not very focused: The first page has a lot of background and tutorial information on MRC that just doesn't get to the point of where this paper is situated and what it contributes.\", \"Neither of the proposed systems are that novel (though it is interesting to see that they still have value even in the age of large contextual language models)\", \"The paper lacks ML novelty\", \"The methods appear to be significantly more expensive to run\", \"Some empirical comparisons appear to be lacking\", \"As well as the missing comparisons mentioned by some reviewers, I think that there are a number of other missing relevant datapoints. 
While not denying that gathering the available results for NewsQA/TriviaQA is much less straightforward than with that nice leaderboard for SQuAD, aren't there are lot of systems with better results on TriviaQA that aren't mentioned in the paper. These include: RoBERTa and SpanBERT (mandarjoshi); BigBird-ETC see https://proceedings.neurips.cc/paper/2020/file/c8512d142a2d849725f31a9a7a361ab9-Paper.pdf; Longformer; SLQA see https://www.aclweb.org/anthology/P18-1158.pdf .\", \"But, overall, I think the decision on this paper comes down to focus and contributions. Not withstanding the growing size of ICLR, I would like to think that it is not just another ML and ML applications conference, but it is a conference centered on representation learning. The present paper, no matter its quality and strong results, just isn't a contribution to representation learning. It is a much better fit to an NLP conference where it would be a strong contribution to question answering, showing the continuing value of linguistic methods like question rewriting in answer validation. But this just isn't a contribution within the focus of representation learning. Just as R4 does, I encourage the authors to clean up the presentation of the paper a bit and to submit it to an NLP conference, where it would be a strong contribution, for the reasons that R3 emphasizes.\"]}",
"{\"title\": \"We appreciate your time for reviewing this paper and your detailed comments and questions\", \"comment\": \"1. Please allow us to give a brief introduction of the MRC task. The sentence \\u201cminimizing span losses\\u201d refers to \\u201cspan loss, 1\\u201d in Figure 1a, i.e., the start/end positions in a segment (or, paragraph). In current MRC models, one major task is to find the answer text from the segment given a pair of <question, segment>. Every token in the segment has an index position such as from 0 to 511. In addition, a reference answer in the input is expressed by a pair of start/end index positions, such as [5,8], which stands for the 5-th to 8-th words being the answer phrase. We can consequently design binary classification losses by comparing the predicted start/end positions with the reference start/end positions.\\n\\n2. We would appreciate it if the reference could be shared with us so that we can make a full comparison. Currently, we used HAN and MRC to search in Google, and found some related papers: (1) https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1194/posters/15791528.pdf, (2) https://www.aclweb.org/anthology/P18-1158.pdf (3) https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/16331/16177 and (4) https://stefanheinrich.net/files/2019_Alpay_IJCNN.pdf. Not sure if these references are related or not. We studied these four papers, and even though all of them mentioned HAN and related variants, they target different MRC tasks (multi-choice) or are based on different CLMs, with much worse results than our baselines.\\n\\n3. We initially aimed at NLP applications of representation learning algorithms/models. Still, we cautiously appeal that our work has impact in a challenging NLP domain, machine reading comprehension, especially for open domains. MRC is adopted in modern search engines such as Google/Bing/Baidu and MRC has been shown to be beneficial to a list of NLP tasks such as NER, text generation, conversations, QA, etc.\\n\\n4. 
Thanks for the insightful question. The question is also related to the ablation test of the effectiveness of loss 4 which is using CLMs to evaluate the rewritten question. [please also refer to our comment 1 and \\u201cAnswer to Q2\\u201d to \\u201cAnonReviewer1\\u201d ] We compared the suggested left-hand-side append without replacing the interrogatives and found that the result dropped significantly due to several facts: (1) most answers are phrases with multiple words and their boundaries are more difficult to score if we simply attach them to the head of the question: the context of the question will be less helpful for detecting missing or duplicated words in the answer phrase; (2) answer phrases make it more difficult for the cross-attention loss to find dependency relations between the interrogative words and the answer phrases. Also note that only around 2% of questions failed alignment and had the predicted answers attached at the left-hand side.\\n\\n5. Please also refer to our \\u201canswer 3 for AnonReviewer 2\\u201d for time complexity reports. Also, for the increase in parameters, please refer to our \\u201cAnswer to Q4 to AnonReviewer1\\u201d. Because the recurrent bidirectional GRU networks are causal and difficult to compute in parallel, the time complexity for both training and testing increased considerably. In addition, we tried one-layer self-attention instead of GRU, yet the results were significantly worse than that of using GRU. Thus, we reported results on GRU instead of one-layer self-attention. In the future, it will be reasonable to consider transformer\\u2019s multi-layer sequence modeling methods or their variants to further decrease the time complexity.\"}",
"{\"title\": \"Thank you so much for the precious and constructive questions and comments\", \"comment\": \"1. Sorry for the ambiguity. L3 (loss 3, we will use L1-5 to express losses 1 to 5) to L5 are ordered from small to big: L3 (phrase-level) for the answer phrase LM score, L4 (sentence-level) for the rewritten question LM score and L5 (multi-sentence-level) for the cross-attention loss between a segment and the rewritten question. Our current ablation tests show that their contribution to the final performance roughly aligns with their granularity: L3<L4<L5. For sure, detailed scores will be included in a future submission.\\n2. We submitted our models non-publicly to SQuAD2.0 and the best model achieved EM/F1 (%) of 88.631/93.245 on answerable questions, 92.821/92.821 on unanswerable questions and finally 90.679/93.038 on the whole test set. The EM score is slightly worse than the top-1 result of 90.724 in the Leaderboard (2020/Nov/17), yet the F1 score is slightly better than the top-1 result of 93.011. [These results are just for reference. We totally understand the policy of reviewing and these results may not necessarily be taken into consideration for paper evaluation]. The code is being cleaned up and we plan to release it in the near future.\\n3. Yes, we initially aimed at verifying the syntactic correctness of answers for answerable questions. Our initial motivation was to help solve open-domain QA applications: when a search engine receives a question, it first applies information retrieval methods to constrain the candidate passages and then uses machine reading modules to further detect the exact phrase as the answer. With this IR-then-MRC pipeline for open-domain MRC, we would like to appeal that: (1) supplying complete and linguistically correct answer phrases by retrieving and reasoning from a list of candidate passages/documents is as important as (2) judging a question to be unanswerable under a given passage. 
This is because we will have quite a few candidate passages for comparison: a question being unanswerable in one passage does not necessarily mean it cannot be answered in another passage from a different document. We also notice that MRC models are applied to Google/Bing for deep question-answering. Furthermore, based on our query log analysis: (1) the number of answerable questions is significantly larger than that of unanswerable questions and (2) disambiguating plausible answers is more challenging than judging a question to be unanswerable. Intuitively speaking, we intend to obtain answers or unknown knowledge when we submit a question-style query to IR. In addition, we possibly need a reason when a question is judged to be unanswerable. So, hopefully, the explainable classification of unanswerable questions will be a helpful direction.\\nFinally, even though we did not design it intentionally, in Table 3, the NoAns\\u2019s EM/F1 are still comparable (92.6 vs. 92.4) to that of Retro-Reader which is specially designed for NoAns.\", \"answer_to_q1\": \"As far as we know, most baselines simply segment by fixed token length regardless of the sentence completeness. In our code, when the segmentation position is in the middle of a sentence, we choose to put that full sentence into the next segment. Thus, each segment in our input only contains complete sentences.\", \"answer_to_q2\": \"We appreciate this insightful question. When we simply attach the predicted answer text to the question, it does break the independent sentence structure of the original question and yields a worse loss of L4. On the other hand, our cross-attention loss L5 can help alleviate this, as it is designed to give guidance on the dependency between the words in the rewritten question and the segment. We investigated this and found the results were almost the same, also considering that only 2% of questions in total have this alignment issue.\", \"answer_to_q3\": \"Appreciate! 
Please also refer to our first answer, and yes we do need to append the detailed ablation results.\", \"answer_to_q4\": \"Yes, we employed the same ALBERT-xxlarge as the unique tunable CLM for both sentence-level and segment-level representation learning.\\nIn QRV, only O(1) parameters for L3 and L4. For L5, we used a revised one-layer multi-head attention layer, the same as in (Zhang et al., 2020b), with 4 Linear networks and 4*(512 * 512+512)=1,050,624 parameters, where 512 * 512 for weight w and 512 for bias b. In HAN, we additionally have two GRU RNNs for sentence-level and segment-level recurrent sequence learning, each with max seq_len=64 and hidden vector dim=512, so it is ((512+64)*64+64+64) * 3 * 2=221,952 parameters. In addition, we tried one-layer self-attention instead of GRU, yet the results were significantly worse than that of using GRU. Thus, we reported results on GRU instead of one-layer self-attention. Multi-layer self-attention will be tested in the future. In the \\\"combination\\\", we additionally have 536,976 parameters (https://arxiv.org/pdf/2004.07067.pdf Figure 3).\\n\\nIn addition, we will recheck \\citep and \\citet; we appreciate your time.\"}",
"{\"title\": \"Express our appreciation for your reviewing and your comments\", \"comment\": \"Express our appreciation for your reviewing and your comments.\\n1. For the TriviaQA dataset, please also refer to our answer 4 for \\u201cAnonReviewer3\\u201d. Allow us to rewrite some sentences here. After receiving this insightful question on stronger baselines, especially GPT-3, T5, and RAG: \\u201cRetrieval-Augmented Generation for Knowledge-Intensive NLP Tasks\\u201d (https://arxiv.org/pdf/2005.11401.pdf) (2020/May/22), in its Table 1, TriviaQA results were 56.1/68.0. Another is GPT-3 (https://arxiv.org/pdf/2005.14165v4.pdf), in its Table 3 (F1 scores), we can see that our results are better than those of T5 and GPT-3 Zero-Shot, yet worse than RAG (68.0) or GPT-3 one-shot/few-shot (68.0/71.2). For sure, we will include these baselines in a future submission of this paper. Because T5 (and the C4 dataset used in T5), BART-Large in RAG together with the whole Wikipedia as a reference, and GPT-3 used far larger data/parameters than the ALBERT-xxlarge that we used as a pre-trained model, these comparisons also reflect that our proposed models on TriviaQA are comparable to some of them.\\nThus, we cautiously appeal that a direct comparison with GPT-3 with 175 billion parameters on 570GB datasets +10-million-USD level cost is possibly less-fair to us. For the RAG baseline, we notice that it also used the whole Wikipedia as the reference knowledge while TriviaQA\\u2019s Wikipedia portion datasets are used to test the performance on information retrieval as well. Generally, we do agree that RAG and GPT-3 with few-shot learning have achieved state-of-the-art results on TriviaQA. We would like to include them as reference baselines.\\n2. 
Actually, by checking the URL of the leaderboard, https://rajpurkar.github.io/SQuAD-explorer/, some information is also missing there which caused the ambiguity: (1) \\u201cALBERT+DA Verifier\\u201d stands for EM=87.847 (87.8 listed in this paper) and F1=91.265 (91.3 listed in this paper). The group is from \\u201cCloudWalk\\u201d and all the information we know is its name \\u201cALBERT+Entailment DA Verifier (single model)\\u201d. (2) \\u201cALBERT+verifier\\u201d stands for EM=88.434 (88.4 listed in this paper) and F1=90.918 (91.0 listed in this paper), when we submitted this paper, its name was simply \\u201cALBERT+verifier\\u201d by the \\u201cQIANXIN\\u201d group, currently its name is \\u201caanet_v2.0 (single model)\\u201d. (3) \\u201cSA-NET on Albert\\u201d stands for top-1\\u2019s \\u201cSA-NET on Albert (ensemble)\\u201d with EM=90.724 (90.7 listed in this paper) and F1=93.011 (93.0 listed in this paper) from group \\u201cQIANXIN\\u201d. The name is all we know since there are no attached papers for them. (4) Retro-Reader online (Zhang et al., 2020b) for \\u201cRetro-Reader (ensemble)\\u201d from \\u201cShanghai Jiao Tong University\\u201d with EM=90.578 (90.6 listed in this paper) and F1=92.978 (93.0 listed in this paper). \\n3. Thanks for the insightful question and we do agree that time costs for both training and testing should be compared. For comparing with outside baselines, one difficulty is that, due to the different GPU/TPU hardware used and the different total numbers of parameters, we can hardly find comparable time-cost information from them, apart from time/power costs such as GPT-3\\u2019s reported month-level training with thousands of NVIDIA V100 cards. 
For comparing with in-house baselines, such as with ALBERT models without these two verifiers, we would like to briefly share it (even though we know that, after the first submission, these scores need not be taken into account for evaluating the paper): for the first question-rewriting verifier, since we used an external POS-tagger and additionally computed three losses, pre-processing SQuAD2.0, NewsQA and TriviaQA cost us around one day, which is not necessary at all in the baselines. After data preprocessing, the training time increased on average from 95 hours to 124 hours (+30.5%) on an NVIDIA V100 32GB GPU card. The training was mainly for fine-tuning the parameters in ALBERT and in the verifiers. Testing on the dev/test sets also increased by around 22.2%, from 45 minutes to 55 minutes. For the second HAN verifier, the major additional costs are the recurrent layers over sentence-level and segment-level sequences. Because RNN models are difficult to train in a non-autoregressive way, the time cost increased on average to around 160 hours (+68.4%) for training/fine-tuning and to 83 minutes for testing (+84.4%). \\n4. Thanks for the comment. \\u201canswer prediction and verification loss\\u201d in Figure 1(b), after H, is actually the same as (or part of) that described in Figure 1(a). Notice that in Figure 1(a), there is a \\u201cH tensor (batch, |q|+|p|+3, h)\\u201d layer; the part not drawn in Figure 1(b) is the same as Figure 1(a)\\u2019s \\u201cpredicted answer span\\u201d and \\u201canswerability\\u201d together with span loss and classification loss, respectively. In order to make the figures compact, we did not draw them explicitly in Figure 1(b) and for sure we can add them in a future submission of this paper.\"}",
"{\"title\": \"Appreciate for your insightful comments and questions\", \"comment\": \"Sincerely appreciate your time and your comments and deeply sorry for a late reply. Here are the answers to these precious questions/comments:\\n1. In table 2, \\u201cRegular Track\\u201d stands for the category of the results reported from the reference papers. On the other hand, we also list the results from \\u201cTop results on the leaderboard\\u201d (https://rajpurkar.github.io/SQuAD-explorer/) here since (1) not every result reported in the leaderboard has an attached paper and (2) not every reference paper has submitted their results directly to the leaderboard, and (3) some papers have different results in their paper compared with the results listed in leaderboard (such as the Retro-Reader (Zhang et al., 2020b) reported the best EM/F1 of 88.1/91.4 in their paper while 90.6/93.0 in the leaderboard). In order to distinguish the results, we here separately list them: \\u201cRegular Track\\u201d for papers\\u2019 reported results and \\u201cTop results on the leaderboard\\u201d for the results submitted to leaderboard. Specially, when the results in leaderboard align with their papers\\u2019 results, we also include these baselines in \\u201ctop results on the leaderboard\\u201d. \\n2. In Table 2, \\u201cQuestion-rewritten verifier\\u201d and \\u201cHAN verifier\\u201d stand for single models, and \\u201cCombination\\u201d stands for ensemble model by employing these two single models and using an ensemble method by referring to (El-Geish, 2020). In addition, quite same with Table 3, Table 4, Table 5, and Table 6: only \\u201cCombination\\u201d rows are ensemble models (i.e., ensemble of the two verifier models), and other rows (of our results) are single models.\\n3. Please also refer to the answer to Question 2: in Tables 2,3,4,5,6, only \\u201ccombination\\u201d rows are ensemble models\\u2019 results and other rows (of our results) are single models\\u2019 results.\\n4. 
We appreciate the insightful question! We just noticed that for SQuAD2.0 and NewsQA, we included ALBERT-xxlarge enhanced baselines, yet for the TriviaQA dataset, the best baselines that we listed here are only BERT-large, which makes the results less comparable. After receiving this insightful question, we further investigated two strong baseline papers, one is \\u201cRetrieval-Augmented Generation for Knowledge-Intensive NLP Tasks\\u201d (https://arxiv.org/pdf/2005.11401.pdf) (open at 2020/May/22), in its Table 1, the TQA=TriviaQA results were 56.1/68.0. Another is the famous GPT-3 (https://arxiv.org/pdf/2005.14165v4.pdf), in its Table 3 (F1 scores), we can see that our results are better than those of T5 and GPT-3 Zero-Shot, yet worse than RAG (68.0) or GPT-3 one-shot/few-shot (68.0/71.2). We will include these baselines in a future version of this paper. Because T5 and GPT-3 used far larger data than ALBERT-xxlarge, these comparisons also reflect that our proposed models on TriviaQA are comparable to some of them.\\nIn addition, we tried hard and could not find a published baseline with ALBERT-xxlarge exactly on TriviaQA. Even though we know that new results should not be taken into consideration for evaluating this paper, we still ran experiments re-implementing the baselines (not new results of our systems) necessary for a fair comparison. We thus ran two experiments ourselves: (1) Google\\u2019s implementation of ALBERT-xxlarge-v2 with TriviaQA and (2) the Retro-Reader (Zhang et al., 2020b) which has exactly the same ALBERT-xxlarge-v2 on the TriviaQA wiki datasets. The results for (1) are EM/F1 (%) of 59.2/64.8 which are comparable to our \\u201cQuestion-rewritten verifier\\u201d, and (2) Retro-Reader achieved EM/F1 (%) of 60.3/64.8 which are comparable to our individual \\u201cHAN verifier\\u201d. 
Based on these, we still would like to argue that the HAN verifier is meaningful and significantly better (p<0.05, tested using the same method in Retro-Reader (Zhang et al., 2020)) than the dry-run of ALBERT-xxlarge-v2, and the \\u201ccombination\\u201d was even better. For sure, we will report these results in a future submission of this paper.\\n5. Sorry to say, Figure 1 is truly ugly and we will try to redraw it using different fonts and colors. \\n6. Yes, for sure, and we hope not to let you down (though, sorry to say, we could not beat the top-1 \\\"SA-NET on Albert (ensemble)\\\" by QIANXIN, even though we came quite close. Our results were scheduled to be added to the leaderboard after the anonymity period -- even though we are still retraining/fine-tuning it -- hopefully it will be better. For detailed EM/F1 numbers please kindly refer to our comment 2 to \\\"AnnoReviewer 1\\\"). In addition, we are cleaning the code and will release it in the near future. [for sure, we do understand the reviewing policy and these results may not necessarily be taken into consideration. Just for reference.]\"}",
"{\"title\": \"New modules with strong empirical results on MRC\", \"review\": \"In this paper, two linguistic verifiers are proposed to improve the model performance on machine reading comprehension datasets, such as SQuAD v2, NewsQA and TriviaQA. The first verifier rewrites the question by replacing its interrogatives with the predicted answer phrases. Then it computes a score between the rewritten question and the context, so that the answer candidates are position-sensitive. The second verifier leverages a hierarchical attention network, so that the long context can be split into shorter segments, which are then recurrently connected to conduct answerability classification and boundary determination.\\n\\nThe empirical results of the proposed method are very strong. Apparently, it achieves a new state-of-the-art performance on the dev set of SQuAD v2. It also outperforms a bunch of strong baseline methods on the NewsQA dataset. Finally, the proposed model also exceeds the BERT model on TriviaQA.\\n\\nOverall, it is a good paper.\\n\\nHowever, I have some comments:\\n\\n1. In table 2, what does \\u201cRegular Track\\u201d mean? \\n\\n2. In your tables, could you separate the ensemble methods and the single models? It would be much easier to draw a fair comparison.\\n\\n3. Are your results achieved by ensemble or a single model?\\n\\n4. You used Albert-xxlarge, but some methods in the tables used smaller pretrained models. For example, in TriviaQA, your baseline is BERT-Large. It might be a bit hard to tell if the improvement is obtained by a better CLM or the proposed modules. So could you do a fair ablation study to verify this?\\n\\n5. The font of figure 1 looks weird and a bit ugly. I suggest the authors make it more reader friendly. \\n\\n6. Will you submit your model to the SQuAD v2 leaderboard? I am very interested in seeing its performance on the test set. 
And I am willing to raise my score if the result aligns with that of dev set (I expect it will top the leaderboard).\\n\\n***********************************\", \"post_rebuttal\": \"The author has addressed most of my questions, and the SQuAD v2 test result is on par with the state-of-the-art, partially indicating the proposed method is effective. So I am happy to increase my rating and champion for the acceptance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes two types of linguistic verifiers for machine reading comprehension task in span extraction form. One is a rewritten question oriented verifier that checks the linguistic correctness of the extracted answers, and the other is based on a hierarchical attention network for answerability classification and boundary determination. The two verifiers are trained independently and then combined together via interpolation. Overall, the paper is well organized and easy to follow.\", \"reasons_to_accept_the_paper\": \"1. The rewritten question oriented verifier could improve the linguistic correctness of the extracted answers.\\n2. The HAN-based verifier considers the entire document instead of each segment independently, which may enable general transformer-based models to handle long-text document.\", \"reasons_to_reject_the_paper\": \"1. Some important baselines are not included in the experimental analysis, such as GPT-3 Few-Shot [1] on TriviaQA which achieves 71.2, and RAG [2] on TriviaQA which achieves 68.0.\\n2. Some important details are missing in the experimental analysis. For example, in Table 2, it is not clear what \\\"DA Verifier\\\" means, and which verifier is used in the method \\\"ALBERT + verifier\\\".\\n3. It is not clearly discussed the additional computational time and cost spent to train the two proposed verifiers, compared to the baseline without verifiers. For a new method that has marginal performance gain, the extra computational cost should be considered.\\n4. The illustration of HAN-based verifier in Fig. 1(b) is not complete, which should have included the part for answer prediction and verification loss, etc.\\n\\n\\nReferences\\n\\n[1] Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Agarwal, S. (2020). Language models are few-shot learners. 
arXiv preprint arXiv:2005.14165.\\n\\n[2] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., ... & Riedel, S. (2020). Retrieval-augmented generation for knowledge-intensive nlp tasks. arXiv preprint arXiv:2005.11401.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good results, demonstrates the value of explicit verification/hierarchies in DL, lower novelty wrt ML elements, manuscript needs significant revision\", \"review\": \"MACHINE READING COMPREHENSION WITH ENHANCED LINGUISTIC VERIFIERS\\n\\nThe authors propose two linguistic verifiers for improving extractive question answering performance when the question is answerable. The first replaces interrogatives in the question (who etc.) with candidate answers and evaluates this both in isolation and in combination with the answer-containing sentence to do answer verification. The second verifier jointly encodes individual sentences and spans with questions in a hierarchical manner to improve answer prediction performance. Solid gains on Squad, NewsQA, and TriviaQA are reported for both methods when applied in isolation, and in combination.\", \"strengths\": [\"The techniques are sound and lead to solid gains on 3 benchmark datasets.\", \"The approaches, while relatively straightforward, illustrate that explicit verification and hierarchical evaluation continue to improve application results, despite the high capacity and efficacy of the SOTA deep architectures.\"], \"limitations\": [\"The paper is understandable but the presentation could be significantly improved. Figure 1a, for example, is a bit overwhelming, and should probably be replaced with something more focused, and moved into supplementary material. Several sentences I couldn't understand, for example \\\"Minimizing span losses of start and end positions of answers for answerable questions is overwhelming in current pretraining+fine-tuning frameworks.\\\" Overall I feel that the paper could use some additional polishing.\", \"A similar hierarchical (HAN) approach was previously proposed for verifying unanswerable questions, but their approach for answerable questions appears to be more effective.\", \"The paper has lower novelty wrt ML elements. 
The component architectures/models that make up their system are well established.\", \"The replacing of interrogatives with the answer and the associated rules for doing so feel like they have somewhat limited scope (e.g. factoid questions, single interrogative questions, etc.). When there is more than one interrogative, the authors back off to simply appending the answer to the question... perhaps this can be done all the time without compromising the performance gains?\", \"Verification (esp. for the HAN verifier, where extra forward passes are done for each sentence and sub-paragraph) is much more computationally demanding, but this is not discussed.\"], \"overall_assessment\": \"A solid applications paper on extractive question answering. However, I feel that the paper is perhaps better suited for an NLP-application focused audience (e.g. NAACL, deadline approaching), since the results are strong, but the paper has lower novelty wrt core ML. Furthermore, the manuscript is in need of significant revision before it can be considered for acceptance at ICLR.\\n\\nquality 5/10 (+results on multiple benchmark datasets, -manuscript needs substantial revision)\\nclarity 5/10 (+understandable for the most part, -manuscript/figures not clear in many places)\\noriginality 6/10 (+novel approaches to QA verification, -lower novelty wrt ML elements)\\nsignificance 6/10 (+strong QA results, +demonstrates value in explicit verification/hierarchical processing in DL applications, -perhaps more suitable for an NLP-applications focused audience)\\noverall (5)\", \"post_rebuttal\": \"Authors, thank you for your feedback. The additional results around relative speed and performance have strengthened the paper. 
However, I still feel that the paper still needs significant polishing before final publication (figures, grammar, presentation), and that the paper is better suited for an NLP-focused conference, and so I have not updated my final score.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #1\", \"review\": \"This work addresses two main challenges of span-extraction style machine reading comprehension (MRC) tasks: how to evaluate the syntactic completeness of predicted answers and how to utilize the rich context of long documents. To handle such challenges, Question Rewritten Verifier (QRV) and Hierarchical Attention Network (HAN) are proposed respectively. The former uses a question-rewritten method to replace the interrogatives in the question with the answer span for further verification. The latter adopts a hierarchical multi-granularity (sentence, segment, and paragraph) cross-attention to extract and utilize the context information of long paragraphs. Compared with the strong baselines, both the verifiers and their combination achieved relatively significant accuracy improvement on three mainstream span-extraction MRC tasks: SQuAD2.0, NewsQA, and TriviaQA.\\n\\n-------------------------------------------\", \"strengths\": \"1. The idea of bringing the answer back to the question for further validation is sound and it is reasonable for humans to do this process to verify the candidate answer in real-world practice. \\n\\n2. The question rewritten strategy is simple and effective, which brings improvements. HAN also handles the problem of long sequence well.\\n\\n3. The overall method achieves state-of-the-art results. The significance test shows significant improvements over baselines.\\n\\n-------------------------------------------\", \"weaknesses\": \"1. The design of the training target (loss) in QRV is complex and not interpretable enough. There are many loss functions. How about their contributions to the final performance?\\n\\n2. There is no test result reported for SQuAD2.0, though it is possible to obtain the results without making it public. 
Therefore, the clarification, \\u201cDue to anonymous issues, we have not submitted our results in an anonymous way to obtain results on the hidden test set.\\u201d, is not quite convincing.\\n\\n3. The improvement of accuracy is mainly reflected in the questions of HasAns, which has no obvious contribution to the recognition accuracy of NoAns, which is one of the main challenges of the current MRC tasks.\\n\\n-------------------------------------------\", \"questions\": \"1. (Section 1 page 2 line 26) The paragraphs are divided into segments, with fixed length (e.g., 512 tokens with strides such as 128 or 256), and then the segments are divided into sentences. So when dividing the paragraph, what if the dividing point is in the middle of a sentence? Would the incomplete sentence be discarded? If not, how to further divide the segment to sentence level? Further clarification of the process would be beneficial.\\n\\n2. (Section 2.1 page 3 line 8) When failing to find the alignment, the answer text is attached at the left-hand side of the question. It obviously damages the sentence structure. So will this affect the judgment of the model in the following process? In other words, would it have an impact on the performance of the final model (increase or decrease) if question-rewriting, including subsequent loss calculations, were not done on such questions?\\n\\n3. (Section 2.2 page 4 line 17) Multiple losses are employed, but the paper did not distinguish the practical effectiveness of each loss. My concern is whether each of the objectives is necessary since the experiment results in Table 3 have verified that $l\\u2019_{3}$ does not significantly improve the accuracy of the model. Would the authors further verify the contribution of other losses to model performance (except for apparently indispensable l1 and l2)?\\n\\n4. (Figure 1 (b)) Do the tunable CLMs of sentence-level and segment level share parameters? 
Besides, may the authors list the number of parameters of each model (QRV, HAN, and Combination)?\\n\\n----------------------------------\", \"minor_issues\": \"The citation format is not consistent, please check the usage of \\\\citep{} and \\\\citet{}.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
t86MwoUCCNe | New Bounds For Distributed Mean Estimation and Variance Reduction | [
"Peter Davies",
"Vijaykrishna Gurunanthan",
"Niusha Moshrefi",
"Saleh Ashkboos",
"Dan Alistarh"
] | We consider the problem of distributed mean estimation (DME), in which $n$ machines are each given a local $d$-dimensional vector $\mathbf x_v \in \mathbb R^d$, and must cooperate to estimate the mean of their inputs $\mathbf \mu = \frac 1n\sum_{v = 1}^n \mathbf x_v$, while minimizing total communication cost. DME is a fundamental construct in distributed machine learning, and there has been considerable work on variants of this problem, especially in the context of distributed variance reduction for stochastic gradients in parallel SGD. Previous work typically assumes an upper bound on the norm of the input vectors, and achieves an error bound in terms of this norm. However, in many real applications, the input vectors are concentrated around the correct output $\mathbf \mu$, but $\mathbf \mu$ itself has large norm. In such cases, previous output error bounds perform poorly.
In this paper, we show that output error bounds need not depend on input norm. We provide a method of quantization which allows distributed mean estimation to be performed with solution quality dependent only on the distance between inputs, not on input norm, and show an analogous result for distributed variance reduction. The technique is based on a new connection with lattice theory. We also provide lower bounds showing that the communication to error trade-off of our algorithms is asymptotically optimal. As the lattices achieving optimal bounds under $\ell_2$-norm can be computationally impractical, we also present an extension which leverages easy-to-use cubic lattices, and is loose only up to a logarithmic factor in $d$. We show experimentally that our method yields practical improvements for common applications, relative to prior approaches. | [
"distributed machine learning",
"mean estimation",
"variance reduction",
"lattices"
] | Accept (Poster) | https://openreview.net/pdf?id=t86MwoUCCNe | https://openreview.net/forum?id=t86MwoUCCNe | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"m7tEffdIKX",
"TqIxAhycdIv",
"RGkQkJ4UqDy",
"UzKl56kk0nU",
"LhhF_5j2BL-",
"ABAyBsr8An0",
"enu39uJn5mz",
"PQduCKMT62Q",
"JSvCgGlsPzS",
"My5vf08l9U5",
"T-b-dTEcdmv"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040353956,
1605215055554,
1605214908708,
1605214753488,
1605214296452,
1605214180295,
1605213851052,
1604046740273,
1603851718430,
1603817472898,
1603705723730
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3418/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3418/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3418/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3418/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3418/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3418/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3418/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3418/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3418/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3418/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a new algorithm for distributed multivariate mean estimation. This method performs significantly better than previous approaches when the input vectors have large norm but are relatively close to each other. The approach relies on lattices and randomized rounding. The approach is evaluated experimentally as well. Overall, there is consensus among the reviewers that this work solves a clean problem using non-trivial ideas. I recommend accepting the paper.\"}",
"{\"title\": \"Individual response\", \"comment\": \"**The actual algorithm used does not match the optimal bounds given.\\nGiven the nature of the problem the constants may be relevant instead of using O notation in particular in the actual algorithm presented and used in experiments.**\", \"our_responses_to_these_two_valid_criticisms_are_related\": \"as the reviewer notes, there is a difference between the \\u201ctheory\\u201d and \\u201cpractical\\u201d versions of our algorithms. The former achieve optimal theoretical bounds, but involve parts which in some cases can be computationally prohibitive; the latter are efficiently implementable but do not achieve the optimal theoretical bounds on worst-case inputs. For this reason, we evaluate the theoretical algorithms using asymptotic notation, but do not attempt to optimize the constants since these will not be the versions implemented in practice. Likewise, we believe the practical algorithms are best evaluated experimentally, and do not give a tight theoretical analysis of them.\\n\\n**Question. Definition 9, the packing radius. Maybe i misunderstand. Is it supposed to be the smallest r such that two balls of radius r centered around any two different lattices points do not intersect? Because that is not what i read from the definition, but that is used in the proofs.**\\n\\nThe reviewer is correct - there is an error in the formal definition, which we will correct in the revision. The packing radius should be the largest (supremum) r such that two balls of radius r centered around any two different lattice points do not intersect.\"}",
"{\"title\": \"Individual response\", \"comment\": \"**Page 17, Lemma 20: Lemma 20 implies that there EXISTS a good coloring. But why is it easy to obtain one for us to use in Algorithm 5?**\\n\\nThe reviewer is correct to note this issue; Algorithm 5 is only the proof-of-concept \\u201ctheory\\u201d version of the algorithm with error detection, and finding such a coloring to use would generally be computationally infeasible. In the full version of the paper we will detail the \\u201cpractical\\u201d version of the algorithm, which uses the simpler, tractable coloring of section A.1 and is straightforwardly implementable. This algorithm, however, has weaker theoretical guarantees: it succeeds on random inputs, and on real data in all our preliminary experiments, with exponentially small failure probability, but would fail against inputs chosen adversarially against the lattice. It remains an important open question to design a practical algorithm which attains the optimal theoretical bounds against adversarial inputs.\\n\\n**The experiments are performed only for $n=2$. It seems to be a very limiting setup. It would be nice to show experiments for larger n. Given that this is mainly a theoretical work, having such experiments is not a major downside, but they are also not very expressive.**\\n\\nWhile most of our experiments are indeed only on two machines and focus on the effect of quantization on only a pairwise interaction, we do present experiments on 8 and 16 machines in the Appendix (Figures 12 and 13) for the real regression dataset (CPUSmall). 
We will aim to add some multi-machine experiments to the main body in the revision, space permitting.\\n\\n\\n**Page 5, last line of statement of Theorem 1: Should $|z\\u2212x_u|=O(\\\\epsilon)$ be $|z\\u2212x_v|=O(\\\\epsilon)$?**\\n\\nNo; the setting of Theorem 1 is that $u$ wishes to provide $v$ with a good estimate $z$ of $x_u$, so the error of that estimate is what we wish to bound by $O(\\\\epsilon)$.\\n\\n**Appendix F: There is a sentence saying: -- \\\"In this section we show an extension to the quantization scheme (and thereby also Algorithm 3) allowing us to use a sublinear (in $d$)\\\" But in Theorem 27, the bound is $O(d \\\\log(1+q))$. I do not understand why this is sublinear in $d$. Why does it make sense to make $q=o(1)$? Shouldn't $q$ be at least a constant?**\\n\\nThis is perhaps an unnatural way of expressing the communication complexities; we use it to combine the upper bounds for both sublinear and superlinear communication complexities into a single expression, and for the sublinear case we indeed take $q=o(1)$. The sublinear algorithm does not need $q$ to be at least a constant, though the linear/superlinear algorithm does. A more natural way to think about the sublinear upper bound is that $O(y^2 c^2)$ output variance uses $O(d/c)$ bits, for $c>1$, and we will clarify this in the revision.\"}",
"{\"title\": \"Individual response\", \"comment\": \"**The paper says: -- \\\"By choosing the leader randomly we can obtain tight bounds in expectation on the number of communication bits used per machine, and by using a more balanced communication structure such as a binary tree we can extend these bounds in expectation to hold with certainty.\\\" Although not very crucial for this work, one should note here that using a binary tree structure would increase the number of rounds (or the communication time) by a log\\u2061n factor until the final result is obtained.**\\n\\nYes - here we chose only to consider the number of bits as our communication measure for simplicity, but in many network models one might also be concerned with round complexity, and there is a trade-off between the two measures. In the revision we will make note of this, and comment that when synchronization rounds are expensive, the one-round star-topology algorithm may be preferable.\\n\\n**Algorithms 3 and 4 perform communication setup, i.e., electing a leader or making a binary-tree like communication network. What is the communication cost of these? 
Is it accounted for anywhere?**\\n\\nThe reviewer makes an important point - we chose to omit the costs of leader election and overlay network construction from our stated complexities, for several reasons:\\n* In the settings of most interest (in particular, whenever $d > polylog(n)$), these costs will be negligible compared to those incurred by the mean estimation algorithms;\\n* The exact costs will depend on the specific capabilities of the communication model, which to a large extent we can otherwise abstract;\\n* There is often a trade-off between communication cost in bits, and other measures which we do not track in this work (such as round complexity);\\n* These setup costs need only be incurred once, even if mean estimation is performed multiple times (as, for example, during distributed SGD);\\n* Leader election and overlay construction are basic network primitives which will most likely have already been performed before our algorithms are run.\\n\\nAs an example, in models which allow all-to-all communication, such as CONGESTED CLIQUE or MPC, we can perform leader election using only $O(1)$ expected communication bits per machine. An algorithm for doing so is the following: machines choose random IDs in $[n^3]$, and send a single bit to all other machines in the round equal to their ID, if they have not yet received a bit from any other machine. In this way, all machines are aware of the machine with the lowest ID, in $O(1) $ expected communication bits (but with an $O(n^3)$ round complexity, which would generally be considered prohibitively high). 
In general, in most reasonable communication models, we could perform leader election and overlay construction in, at worst, $polylog(n)$ rounds and expected bits per machine, which is dominated by the mean estimation cost when $d > polylog(n)$.\\n\\nIn the revision we will state more explicitly that these costs are separate, and must be accounted for in whatever specific distributed system one wishes to use.\"}",
"{\"title\": \"Individual response\", \"comment\": \"**One particular request I have is to describe the computationally simple algorithm via the reduction to $\\\\ell_{\\\\infty}$ directly, not using lattices, since it's much simpler this way (it would be random rotation followed by rounding and hashing).**\\n\\nWe thank the reviewer for this suggestion; we will add such a description in the revision.\"}",
"{\"title\": \"Individual response\", \"comment\": \"**It would be beneficial if the authors discuss and compare the following straightforward approach: We can use a previously known algorithm and allow the machines to obtain a potentially inaccurate estimate of the mean, say $\\\\hat \\\\mu$. Then, we can run the algorithm again. This time vectors being $x_v - \\\\hat \\\\mu$. In this step, we bring down the points closer to the center. Thus, the accuracy increases over time. We can repeat this process several times until we achieve the desired accuracy.**\\n\\nThis is an interesting proposal. We first note that, if we applied a previously known algorithm, we would still obtain error bounds in terms of input norms rather than differences, due to the first iteration, so we would not see a direct comparison. The comparative performance would, as currently, depend on the disparity between input norms and differences. \\n\\nHowever, we could also apply our own algorithm in this fashion to compare the performance: in this case, we see that we would obtain the same asymptotic communication bounds by iterating as we would by reaching the target accuracy in one iteration. To see this, consider two iterations, in which we first reduce input variance by a multiplicative factor of $q_1$, and then by a factor of $q_2$, overall decreasing variance by $q = q_1 q_2$. In total we use $O(d \\\\log q_1 + d \\\\log q_2) = O(d \\\\log q)$ bits per machine, matching the complexity of our current one-shot algorithm.\\n\\n**It might be worth looking at a series of papers that studied the mean estimation of random vectors in a non-distributed setting. They mainly focus on the tail behavior of the estimate. They use different estimates other than average, such as the median of means. 
See \\u201cOn the estimation of the mean of a random vector\\u201d and more recent papers that cited this one.**\\n\\nSince our theoretical work is so far only concerned with asymptotic results, we were content to approximate sample mean (which is an asymptotically optimal estimator for any reasonable class of distributions) and concentrate on minimizing the additional error incurred by quantization. However, the reviewer is quite right - if one aims to minimize the constant factor in the error, there is a rich line of research from statistics into better estimators of the mean. Using these in a distributed fashion would add another layer of technical difficulty, since they are not necessarily aggregate functions and so could not be combined over a binary tree as in Algorithm 4. We will add discussion of this in the revision.\\n\\n**It is unclear from the main text which convex hull is picked. Maybe it worth discussing some high-level explanations in the main text as well.**\\n\\nAny convex hull of lattice points containing the input vector suffices for the theoretical results; procedures for finding such convex hulls will depend on the lattice and norm chosen. We will add discussion of what this means for the cubic lattice in particular (where such a convex hull can be found by a simple rounding procedure).\\n\\n**Bottom of page 4: Using Chebyshev\\u2019s inequality would only guarantee that a constant fraction of points is within distance $O(\\\\sigma \\\\sqrt{n})$.**\\n\\nThis statement in the paper is aimed primarily at providing an informal intuition, but what we mean by it is the following:\\nChebyshev\\u2019s inequality implies that an input is within distance $c \\\\sigma \\\\sqrt{n}$ of the true vector ${\\\\mathcal \\\\nabla}$ with probability at least $1 - 1/(c^2 n)$. 
So, by a union bound we have that with probability $1-1/(c^2)$ all inputs are within distance $c \\\\sigma \\\\sqrt{n}$ of ${\\\\mathcal \\\\nabla}$, and (setting $c$ to be a sufficiently large constant) therefore within $O(\\\\sigma \\\\sqrt{n})$ distance of each other.\\n\\n**Page 7, Line 1: $O(\\\\sigma^2)$ -> Shouldn\\u2019t be $O(\\\\sigma^2/n)$?**\\n\\nNo, although we should perhaps rephrase for clarity in the revision. The message of that line is that Theorem 7 implies (possibly surprisingly) that in order to agree even on an output with $\\\\Theta(\\\\sigma^2)$ output variance, we need $\\\\Omega(d \\\\log n)$ bits, the same asymptotic amount required to reach optimal $O(\\\\sigma^2/n)$ output variance.\"}",
"{\"title\": \"General revision plan\", \"comment\": \"We thank the reviewers for their time and helpful comments. We aim to provide individual responses addressing the points of each review, highlighting the changes we will make in the revision to reflect these points. Minor comments requiring straightforward corrections are omitted from these responses, but will naturally correct in the revision. We thank the reviewers for drawing attention to them.\\n\\nWe plan to incorporate all the mentioned changes in the revision, and would welcome and greatly appreciate further comment from reviewers if they feel there are issues which we still have not adequately addressed, so that we may make further improvements.\"}",
"{\"title\": \"Reviewer2\", \"review\": \"Summary: This paper studies the problem of mean estimation of n vectors in R^d in a distributed setting. There are n machines. Each has one data point, a vector x_v. The goal is to compute the mean of x_v\\u2019s for v = 1, \\u2026, n with as few bits of communication as possible.\\nTheir main contribution is using a new quantization method by exploiting the structure of lattices. The idea is that each machine randomly assigns its point to a nearby lattice point. Then, the lattice point can be expressed by a few bits. While two points might be assigned to the same bit strings (since the lattice has infinitely many points), the hope is that these points are far apart from each other.\\nFurthermore, we need to ensure that other machines could decode the bit string and obtain the same lattice point again and use this point to compute the average. Thus, another machine would interpret the bit string as a close-by lattice point. Note that since we assume some guarantees that all the x_v's are close to each other, it suffices that a machine selects the closest lattice point whose description matches the bit string. \\nExperimental evaluation is also provided.\", \"overall_evaluation\": \"The authors consider an essential problem and use an interesting idea to solve it. Adding more discussion and literature review might help improve the paper. The writeup could be improved as well.\", \"major_comments\": [\"It would be beneficial if the authors discuss and compare the following straightforward approach: We can use a previously known algorithm and allow the machines to obtain a potentially inaccurate estimate of the mean, say \\\\hat \\\\mu. Then, we can run the algorithm again, this time with the vectors being x_v - \\\\hat \\\\mu. In this step, we bring down the points closer to the center. Thus, the accuracy increases over time. 
We can repeat this process several times until we achieve the desired accuracy.\", \"It might be worth looking at a series of papers that studied the mean estimation of random vectors in a non-distributed setting. They mainly focus on the tail behavior of the estimate. They use different estimates other than average, such as the median of means. See \\u201cOn the estimation of the mean of a random vector\\u201d and more recent papers that cited this one.\"], \"minor_comments\": [\"It is unclear from the main text which convex hull is picked. Maybe it worth discussing some high-level explanations in the main text as well.\", \"Bottom of page 4: Using Chebyshev\\u2019s inequality would only guarantee that a constant fraction of points is within distance O(\\\\sigma \\\\sqrt{n}).\", \"Abstract, Line 3: where \\\\mu is defined, \\u201c\\\\n\\u201d is missing.\", \"Page 2, last paragraph: \\u201cthanto\\u201d -> other than the\", \"Page 7, Line 1: O(\\\\sigma^2) -> Shouldn\\u2019t be O(\\\\sigma^2/n)?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice to know result, which solves a particular (one may say pathological) case of an important problem, no groundbreaking techniques\", \"review\": \"The paper considers a particular setting of the distributed mean estimation problem, where each party has a vector of potentially large $l_2$ norm, yet these vectors are fairly close to each other. The goal is to communicate as few bits as possible and estimate the mean of the vectors. Previous approaches had a dependence on the size of the ball containing all the vectors, which gives bad bounds if vectors are long (but close to each other).\\n\\nThe idea is to decompose each vector into a convex hull of points in some lattice, then probabilistically round to one of these points, hash lattice points down to a short string and communicate this string. This allows us to recover the points we rounded to for each party and thus estimate the mean.\\n\\nIn order for the communication to be efficient, the cover radius and packing radius should be within $O(1)$ of each other. For the $\\\\ell_2$ norm, this is achievable for random lattices; however, such lattices are computationally intractable. The authors notice that we can reduce the original mean estimation problem to the $\\\\ell_\\\\infty$ case (incurring a logarithmic loss in the accuracy) and then simply use the cubic lattice.\\n\\nOverall, I think the result is fairly interesting. 
None of the techniques are amazingly new (lattice quantization was used before in various contexts, e.g., locality-sensitive hashing; quantization + hashing down to a short string is a fairly standard idea as well; reduction from $\\\\ell_2$ to $\\\\ell_\\\\infty$ for mean estimation was also used before, e.g., for statistical queries), but I like the clean end result.\\n\\nOne particular request I have is to describe the computationally simple algorithm via the reduction to $\\\\ell_\\\\infty$ directly, not using lattices, since it's much simpler this way (it would be random rotation followed by rounding and hashing).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A result on the distributed mean estimation problem among N machines parametrized by input variance instead of input norm.\", \"review\": \"The paper studies the distributed mean estimation problem where $N$ machines (each holding 1 value $x_u$) wish to compute the mean of all $N$ values.\\nAlong with values $x_u$, each machine receives a common value $y$. This value $y$ upper-bounds $\\\\|x_u - x_v\\\\|$ over all machines $v$. The parameter $y^2$ is called the input variance.\\nThe authors propose a lattice-based algorithm whose quantization parameter allows a trade-off between numbers of bits needed to be communicated and the output variance of the estimated mean. One crucial contribution of the paper is that it provides guarantees with respect to input variance instead of input norm (which can be large if the inputs do not have 0 mean).\\n\\n\\nThe paper is well-written. It has a clear description of the problem and provides a natural motivation. It also gives a great overview of prior work. The main idea of the algorithm is also explained in a clear way.\\n\\nThis work studies a basic and important problem in distributed computation. It combines and proposes several interesting ideas. 
I especially like the idea of lattice-based quantization and using local information, i.e., $x_v$, together with y to perform decoding.\", \"the_paper_says\": \"-- \\\"By choosing the leader randomly we can obtain tight bounds in expectation on the number of communication bits used per machine, and by using a more balanced communication structure such as a binary tree we can extend these bounds in expectation to hold with certainty.\\\"\\nAlthough not very crucial for this work, one should note here that using a binary tree structure would increase the number of rounds (or the communication time) by a $\\\\log n$ factor until the final result is obtained.\\n\\nAlgorithms 3 and 4 perform communication setup, i.e., electing a leader or making a binary-tree like communication network. What is the communication cost of these? Is it accounted for anywhere?\\n\\n\\nPage 17, Lemma 20:\\nLemma 20 implies that there EXISTS a good coloring. But why is it easy to obtain one for us to use in Algorithm 5?\\n\\n--- Experiments ---\\nThe fonts in plots are too small.\\nThe experiments are performed only for $n=2$. It seems to be a very limiting setup. It would be nice to show experiments for larger $n$. Given that this is mainly a theoretical work, having such experiments is not a major downside, but they are also not very expressive.\\nAmong Figures 1, 2 and 3, in only one plot the x-axis starts from 10, but in the rest from 0. Would be nice to be consistent.\\n\\n\\n\\n--- Other comments ---\\n\\nPage 1, Line 3: Should $\\\\mu = \\\\sum_v x_v$ be $\\\\mu = 1/n \\\\cdot \\\\sum_v x_v$?\", \"page_2\": \"\\\"thanto the origin\\\" -> \\\"than to the origin\\\"\\n\\nPage 4, Line 5 of 2nd paragraph of Section 2.1: Is there a typo in \\\"By choosing an appropriate of lattices\\\"?\\n\\nPage 5, first line: Would be nice to emphasize that this is a probabilistic process as $z$ is actually sampled from the linear representation of $x_u$. 
\\n\\nPage 5, last line of statement of Theorem 1:\\nShould $\\\\| z - x_u \\\\| = O(\\\\varepsilon)$ be $\\\\| z - x_v \\\\| = O(\\\\varepsilon)$?\\n\\nPage 12, 3rd line of 2nd paragraph of section A.1: Should $c_i(\\\\lambda) = \\\\alpha_i mod q$ be $(c_q(\\\\lambda))_i = \\\\alpha_i mod q$?\\n\\nPage 12, 4th and 5th lines of 2nd paragraph of section A.1: $c(\\\\lambda)$ should be $c_q(\\\\lambda)$?\", \"appendix_f\": \"\", \"there_is_a_sentence_saying\": \"-- \\\"In this section we show an extension to the quantization scheme (and thereby also Algorithm 3) allowing us to use a sublinear (in d)\\\"\\nBut in Theorem 27, the bound is O(d log(1+q)). I do not understand why this is sublinear in d. Why does it make sense to make q=o(1)? Shouldn't q be at least a constant?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Clean Distributed Mean Estimation approach\", \"review\": \"The paper considers distributed mean estimation in two variations (mean estimation and variance reduction), applicable for instance in distributed learning where several machines need to figure out the mean of their locally computed gradients.\\n\\nThe paper measures the quality of an estimator in terms of the input variance, where earlier work has implicitly assumed that the input across the machines had mean zero, and instead measured quality in terms of the norm of the inputs.\\nIn that sense the approach taken in this paper generalizes previous work. \\n\\nThe authors provide matching upper and lower bounds for the two problems considered, as well as a practical implementation of the general form of algorithms presented. Finally, experiments back up the quality of the approach considered.\", \"pros\": [\"I think the definition of the problems is natural and clean and the right one to consider (instead of assuming zero-centered inputs).\", \"The approach makes application of these algorithms much simpler as the zero mean assumption is removed and does not need to be handled separately\", \"The general lattice-based algorithms are natural and very reasonable.\", \"The efficient algorithm instantiation of the general approach is nice.\", \"It is great that the authors provide matching upper and lower bounds and in general the work seems very thorough.\", \"The experiments show the applicability of the general approach.\"], \"cons\": \"- The actual algorithm used does not match the optimal bounds given.\\n- Given the nature of the problem the constants may be relevant instead of using O notation, in particular in the actual algorithm presented and used in experiments.\\n\\nThe cons I have listed are, I think, all small, and overall I think this is a good paper as it provides a clean practically applicable version of the problem, the bounds shown are tight, and an actual new algorithm is provided and 
shown to have good practical qualities.\\n\\nQuestion.\\nDefinition 9, the packing radius. Maybe I misunderstand. Is it supposed to be the smallest r such that two balls of radius r centered around any two different lattice points do not intersect? Because that is not what I read from the definition, but that is used in the proofs.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
yEnaS6yOkxy | Class Balancing GAN with a Classifier in the Loop | [
"Harsh Rangwani",
"Konda Reddy Mopuri",
"Venkatesh Babu Radhakrishnan"
] | Generative Adversarial Networks (GANs) have swiftly evolved to imitate increasingly complex image distributions. However, the majority of developments focus on the performance of GANs on balanced datasets. We find that the existing GANs and their training regimes which work well on balanced datasets fail to be effective in the case of imbalanced (i.e. long-tailed) datasets. In this work we introduce a novel and theoretically motivated Class Balancing regularizer for training GANs. Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset. This is achieved via modelling the effective class frequency based on the exponential forgetting observed in neural networks and encouraging the GAN to focus on underrepresented classes. We demonstrate the utility of our contribution in two diverse scenarios: (i) Learning representations for long-tailed distributions, where we achieve better performance than existing approaches, and (ii) Generation of Universal Adversarial Perturbations (UAPs) in the data-free scenario for the large scale datasets, where we bridge the gap between data-driven and data-free approaches for crafting UAPs. | [
"Long-tailed Learning",
"GAN",
"Universal Adversarial Perturbations"
] | Reject | https://openreview.net/pdf?id=yEnaS6yOkxy | https://openreview.net/forum?id=yEnaS6yOkxy | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"3GC4oMIvYo",
"7CoQ9WzMCYG",
"WquLCjHNgW",
"RBIwkLn4aYd",
"Z2Lg3QDOrtL",
"BTiKjJKSx6f",
"_QhabSy_jPu",
"7t6yZcxzfFl",
"C-0hAOsZ6Ch",
"hk7AFnAt9dS",
"ohBUTWW1pb-",
"Fm9JEvNOmHX"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040447919,
1606081091659,
1606080985551,
1606065528251,
1606065222871,
1606065040112,
1605994564882,
1605992313450,
1603895491444,
1603875589707,
1603741051197,
1603058739782
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3415/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3415/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3415/Area_Chair1"
],
[
"ICLR.cc/2021/Conference/Paper3415/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3415/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3415/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3415/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3415/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3415/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3415/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3415/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The authors have provided very detailed responses and added additional experimental results, which have helped address some of the referees' concerns. However, since the modification made to a vanilla GAN algorithm is relatively small, the reviewers are hoping to see the experiments on more appropriate real-world datasets (not artificially created imbalanced datasets with relatively few classes), more/stronger baselines, and rigorous theoretical/empirical analysis of the method's sensitivity to the quality of the pre-trained classifier. The paper is not ready for publication without these improvements.\"}",
"{\"title\": \"Response to Reviewer 2 (Part 2/2)\", \"comment\": [\"1. Image Generation (Continued from previous part sequentially)\", \"Both the conditional GANs and the proposed regularizer use the same labelled data.\", \"Yes, in our current experiments that is the case. But the performance achieved by our method is superior to the performance of cGAN (which is the only one producing a balanced distribution). We have achieved better FID in 3 out of 4 cases. Our method is able to get better downstream classifier accuracy in all cases, as shown in Table 1. Also, we have now provided results with a classifier trained with 0.1% labeled data in Section 4.2.\", \"Balanced Batch for cGAN training.\", \"We tried the method of balanced resampling and found that the instability issue still persists and we get a FID of 56.89 +/- 0.04, which is worse than using cGAN (FID 48.13 +/- 0.01) without resampling (which we initially used).\", \"2. Universal Adversarial Perturbations (UAP)\", \"Unfairness in comparison of UAP Results\", \"In the data-free approaches it is assumed that the classifier is available. All the compared methods (in Table 4) make use of a classifier + some prior data which is not ImageNet.\", \"GDUAP + P - We report GDUAP results with prior texture data obtained from the paper [2].\", \"PDUA + P - Method uses classifier + some prior texture data\", \"AAA - Method uses classifier + Prior data obtained from activation Maximization\", \"MI ADV - Classifier + Prior Data COCO (This is highlighted as the prior data COCO overlaps with ImageNet, which can make it less challenging to craft the perturbations)\", \"Our method - Uses classifier (In Regularizer and UAP algorithm) + Prior Comics Data\", \"Data Free UAP practicality: Lots of deep learning models are provided to user devices without releasing the training data due to privacy and other concerns. 
Also, the datasets are huge in size, which adds significant overhead to handle them; this overhead is avoided in data-free methods, making them efficient for the attacker. So data-free methods are helpful for adversary creation on models in the above cases.\", \"UAP can be learnt on ImageNet data and clarification of comparison of results.\", \"In the data-free method we assume access to the trained classifier which has to be fooled. The training data on which it is learned is not available. In this case, the attacker may use arbitrary or proxy data samples for crafting the perturbations. It is usually considered that attacks which do not use ImageNet data are weaker than the attacks created using ImageNet data. To show that our data-free method is comparable to cases when an attack is created using ImageNet data, we had added the last row in Table 4. We have now clarified the statement by adding the exact difference in results. We thank you for the suggestion.\", \"Sampling using multiple checkpoints in case of DCGAN\", \"The aim of the regularizer is to shift the focus of the GAN towards generation from different minority classes. Due to the limited capacity of DCGAN, it is bound to forget modes and shift to new minority classes (which is the aim of the regularizer). Hence we sample from multiple cycles to cover all classes. For the generation of all ImageNet classes simultaneously, large GANs are required. This limited capacity issue is also observed in the ACGAN paper, where the authors use 100 DCGANs to generate 1000 ImageNet classes. We have added the explanation in the revised paper.\", \"Table 2 is not discussed in the text.\", \"We have now discussed both Table 2 and Table 3 in the text.\"], \"minor_comments\": [\"The height-width ratio of Figure 1b should be rectified.\", \"We have fixed the height to width ratio.\", \"The discussion is not well written. 
Especially the message of the 2nd bullet point needs more clarifications.\", \"We have clarified the 2nd point by performing experimentation in semi supervised setup.\", \"GAN + regularizer is a semi-supervised setup, not unconditional (as claimed in the paper).\", \"By an unconditional model we refer to the fact that our GAN is still of the form G(z) and does not require a label for generation of sample. Whereas conditional GANs model G(z|y) which requires a label for generation of image. Yes, our method can be used in both supervised and semi supervised settings.\", \"[1] Zhao, S., Liu, Z., Lin, J., Zhu, J. Y., & Han, S. (2020). Differentiable augmentation for data-efficient gan training. _Advances in Neural Information Processing Systems_, _33_. \\\\\", \"[2] Liu, H., Ji, R., Li, J., Zhang, B., Gao, Y., Wu, Y., & Huang, F. (2019). Universal adversarial perturbation via prior driven uncertainty approximation. In _Proceedings of the IEEE International Conference on Computer Vision_ (pp. 2941-2949). \\\\\", \"[3] Brock, A., Donahue, J., & Simonyan, K. (2018, September). Large Scale GAN Training for High Fidelity Natural Image Synthesis. In _International Conference on Learning Representations_.\"]}",
"{\"title\": \"Response to Reviewer 2 (Part 1/2)\", \"comment\": [\"We thank the reviewer for his valuable comments and suggestions. We have tried to improve the paper based on these suggestions. We provide the clarifications to the concerns below:\", \"1. Image Generation\", \"GAN Baseline of generating samples and using classifiers to provide labels.\", \"From Figure 2 it can be seen that certain classes get mode collapsed in the case of an unconditional GAN and the distribution of samples learnt is arbitrary (different from the long-tailed distribution). If we consider a basic probability model of a geometric distribution for getting 5k samples from a minority class, assuming the distribution of generated samples as in Figure 2, we would require an expected number of 1.2 million sampling steps to get 5k minority samples. This is not a principled/efficient way of generating samples.\", \"Unconditional GAN may have a lesser FID. (Unfairness)\", \"Intuitively that was our expectation as well. But we find the converse being true, which is an interesting observation. The FID of the unconditional GAN is better than the conditional GAN in long-tail cases (in case of imbalance ratio = 100, as shown in Table 1), which is also described in Section 2.2. A similar observation is also made in the concurrent paper Data Efficient GANs [1], which shows conditional GANs suffer more when used in scarce data scenarios, in line with our experiments.\", \"Experimentation in semi-supervised setup with the pre-trained classifier trained using other sources.\", \"We thank the reviewer for suggesting these experiments. We have now added Section 4.2, in which we use a classifier which is obtained by fine-tuning an ImageNet pretrained model with 0.1% of labelled data. This classifier used in our GAN + Regularizer framework also provides a balanced distribution compared to the unconditional GAN. Unlike cGAN and ACGAN, which specifically require labeled samples, our method does not depend on the labels. 
All we need is a basic classifier with reasonable accuracy.\", \"Reliance of the method and evaluation on a pre-trained classifier which can overfit on majority classes.\", \"We use a technique called Deferred Reweighting (which ensures all classes have reasonable performance) [1] for training classifiers on long tailed distributions which are used in GAN training framework. Also in the answer above, we show that our method is compatible with classifiers learnt using transfer learning too. We provide per class accuracies for CIFAR10 long tail distribution (imb factor = 100) below: \\\\\"], \"validation_accuracies\": \"[0.934,0.978,0.776,0.715,0.787,0.685,0.776,0.647,0.587,0.620]\\\\\\nThe training samples decrease for each class as we go from left to right in an exponential fashion. \\n * The classifier used for evaluation of all GANs is trained on a balanced dataset and has a higher validation accuracy (see Appendix A.3). This classifier is only used for evaluation and is not used in regularizer formulation for GAN training.\\n\\n* Explanation for ACGAN having a biased distribution:\\n * ACGAN loss consists of two loss terms i.e. GAN Loss (Real/Fake) + Classification Loss. As the GAN distribution is imbalanced the discriminator tends to classify majority class images as real and ignore other classes. When a fake class label is given to a generator to decrease the loss it has two options one is to decrease the GAN loss (Real/Fake) by generating majority class or reduce the classification loss by generating an image of correct class. In such cases sometimes the generator favours to decrease GAN (Real/Fake) loss and ignores the label to generate majority samples which leads to imbalance.\\n* Figures for samples with labels.\\n * We share images of LSUN dataset (Imbalance Ratio = 10) with labels from the classifier. [http://s000.tinyupload.com/?file_id=21497176106115832192/](http://s000.tinyupload.com/index.php?file_id=21497176106115832192). 
The validation accuracy on balanced validation set for different classifiers used is present in Appendix A.3\\n* The proposed method is also trained with a big batch size of 256, which is very helpful for covering all classes. It would be more useful to see if the method also works well with small batch sizes of 16 or 32, which are common for high resolution GAN image synthesis.\\n * We provide results for DCGAN on CIFAR-10 (Imbalanced ratio = 10) with small batch size using the same hyperparameters below, the low KL Div of GAN class distribution to uniform show that the balancing effect is still preserved for lower batch sizes:\\\\ \\n | Batch Size \\t| FID \\t| KL DIV \\t|\\n|------------\\t|----------------\\t|--------------\\t|\\n| 16 \\t| 50.05 +/- 0.15 \\t| 0.03 +/- 0.0 \\t|\\n| 32 \\t| 38.58 +/- 0.04 \\t| 0.01 +/- 0.0 \\t|\\n| 256 \\t| 30.48 +/- 0.07 \\t| 0.01 +/- 0.0 \\t|\\\\\\nOur results also show the same trend of bigger batch size leading to smaller FID values as seen in [3], when trained for a fixed number of iterations.\"}",
"{\"title\": \"Discussion needed\", \"comment\": \"Dear Reviewers,\\n\\nThe authors have provided a detailed response and uploaded their revised manuscript. Would you please take a careful look at their response and revision? Please respond to the authors and update your review accordingly.\\n\\nThanks,\\nAC\"}",
"{\"title\": \"Response to Reviewer 1 (Part 2/2)\", \"comment\": \"* Lack of Applications\\n * In the revised paper (Section 4.2), we demonstrate training a class-balanced GAN using 0.1% labeled data. This shows that using the proposed regularizer, we can leverage pre-trained classifiers (trained on other related datasets), which was not possible in ACGAN and cGAN. This reduces the labeled data requirement for GAN training and also ensures that all classes are learnt uniformly even in the presence of class imbalance in training dataset.\\n * Fairness Application: In addition to the applications discussed in the paper, our method can be used in fairness applications as well, where the classifier gives feedback about a certain attribute to equalize it in generation. For example Equalizing the proportion of males and females in the generated distribution.\\n * We request the reviewer to kindly consider our Data-Free UAP experimental results as well. This is an application of the proposed regularizer, which helps the generation of diverse class images for data-free UAP generation. Our proposed data-free method currently surpasses the state-of-the-art method which uses data. \\n* We thank the reviewer for the additional comments, which have been fixed in the revised version. The terms FID and UAP have been defined in the Abstract and Introduction. \\n\\n[1]Cao, K., Wei, C., Gaidon, A., Arechiga, N., & Ma, T. (2019). Learning imbalanced datasets with label-distribution-aware margin loss. In _Advances in Neural Information Processing Systems_ (pp. 1567-1578).\\n\\n[2] Kang, B., Xie, S., Rohrbach, M., Yan, Z., Gordo, A., Feng, J., & Kalantidis, Y. (2019, September). Decoupling Representation and Classifier for Long-Tailed Recognition. In _International Conference on Learning Representations_.\\n\\n[3] Tang, K., Huang, J., & Zhang, H. (2020). Long-tailed classification by keeping the good and removing the bad momentum causal effect. 
_Advances in Neural Information Processing Systems_, _33_.\\n\\n[4] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). Gans trained by a two time-scale update rule converge to a local nash equilibrium. In _Advances in neural information processing systems_ (pp. 6626-6637).\\n\\n[5] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved training of wasserstein gans. In _Advances in neural information processing systems_ (pp. 5767-5777).\"}",
"{\"title\": \"Response to Reviewer 1 (Part 1/2)\", \"comment\": [\"We are glad that the reviewer finds our method interesting. We provide clarifications to the questions asked below:\", \"Concern regarding the requirement of classifiers in the framework for hard imbalance datasets.\", \"We do understand the reviewer\\u2019s concern about getting classifiers to work on long tailed distributions. In our experiments, we train the classifier also in the long-tailed setting using Deferred Reweighting[1]. Secondly, the progress on research in training classifiers on long-tailed data distributions is significantly better when compared to the progress on the front of training GANs on long-tailed distributions [1,2,3] . In our work we leverage the progress on classifier training to improve GAN training on long-tailed distributions.\", \"Our method can also leverage available pre-trained classifiers on other similar datasets such as ImageNet, as is common practice. We show the results of our approach in a semi-supervised scenario, where we fine-tune an ImageNet pre-trained model using 0.1% labelled data in Section 4.2. Such approaches cannot be used in conditional GANs.\", \"Conditional GANs (both cGAN and ACGAN) have a classifier integrated in the discriminator, which is crucial for training the generator. So, these architectures also suffer from the same class-imbalance issues as classifier training.\", \"Conflicting loss terms in the objective:\", \"**Aren't these two terms then conflicting?** We believe there is no conflict in the two terms. The regularizer penalization is dependent on the current distribution of GAN generated samples ($N^t$) and does not focus on a particular set of minority classes throughout the training. The regularizer tends to increase the proportion of class k for which $N_k^t$ is lower (i.e. minority in current GAN distribution). This adaptive phenomenon in regularizer ensures that GAN distribution becomes uniform. 
In the absence of the regularizer, the GAN would cater to the majority classes, and in such a case the GAN objective appears to be conflicting to the objective of the regularizer. However, in the presence of the regularizer, the GAN can generate images from minority classes. Since the primary objective of the Generator is to be able to fool the discriminator, this can be achieved on both majority and minority classes. Thus, the regularizer objective only aids the GAN towards generating minority class data, and does not go against the GAN objective.\", \"**If so, can this conflict impact the convergence of the network?** We find similar convergence in FID values of GAN with and without the regularizer. We have provided the curve of FID vs number of iterations in Figure 5.\", \"Results on large datasets and perceptual quality:\", \"Results on other datasets: In the revised paper (Table-3), we show results on long tailed CIFAR-100 dataset (Imbalance Ratio =10, SNResGAN architecture), where we are able to get better FID and also generate a balanced distribution similar to cGAN.\", \"Rationale for choosing LSUN and CIFAR-10: It has been shown in existing works [4, 5] that the current GAN architectures work well on CIFAR-10 and LSUN. Since we aim to highlight a potential issue in the existing GAN implementations, we used long-tailed versions of the same dataset to bring out the issues in the long-tailed case.\", \"In our UAP experiments (Section 4.3), we generate 128 x 128 images using a DCGAN on a 1000 class dataset. We show that we are able to generate 968 distinct classes using the proposed approach. We are the first to show that a data free method is able to surpass the state-of-the-art data-driven method on the ImageNet dataset.\", \"iNaturalist contains a very large number ( > 4000) of classes and images, and to the best of our knowledge, there are no existing GAN papers which show Image Generation baseline results on this dataset. 
Similarly no baselines were found for Imagenet-LT as well. We could not show results on this dataset due to computational limitations. We show the generation of a large number of classes for UAP experiments (on ImageNet) and also show that our method works for long-tailed CIFAR-100 dataset.\", \"**_Does this method still preserve a good perceptual image?_** Yes, we find that our regularizer is able to generate more perceptual images compared to the baseline (without regularizer) across all datasets. We use FID to measure the quality and diversity in images. We have provided generated images in Figure 8, 9 and 10 in Appendix for LSUN and CIFAR-10 and CIFAR-100 datasets.\"]}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for his comments and suggestions. We provide answers to the questions raised by the reviewer and describe the changes we have made:\\n* Relationship between class distribution and the regularizer\\n\\n * $\\n\\\\underset{\\\\hat{p} }{\\\\max} \\\\sum_{k} \\\\frac{\\\\hat{p}_k\\\\log(\\\\hat{p}_k)}{N_k^t} \\n$\\n\\n The class distribution of the generated samples is used as inverse weight in the above regularizer term. The $N^t$ is class distribution of GAN samples in cycle t and $\\\\hat{p}$ is approximation of batch distribution. If $N_k^t$, the probability of class-k in the current generated distribution is low, then $\\\\hat{p}_k$ (i.e concentration of class-k) for the batches in the next cycle is encouraged to be larger as it would yield an improved objective value. Whereas for another class, which has a large share in the current distribution (large $N_k^t$), having a large $\\\\hat{p}_k$ is not advantageous as it has a large denominator of $N_k^t$. We have updated the notation to make it more clear and also added additional explanation to describe the connection in Section 3.2.\\n\\n\\n\\n* Similarity of Objective to Maximize Entropy and concern of small batch size.\\n * The regularizer term mentioned in the above section is a weighted version of the entropy which is maximized. The weight for class k is the inverse of the class probability $N_k^t$ which is estimated by steps in Section 3.1. If we consider an extreme case where batch_size < num_classes the entropy can still be maximized by generating a fixed set of (batch_size) number of classes in the training process and network can ignore other classes. But in our case due to the weight of $N_k^t$ an increased value of weighted entropy objective can be obtained by generating more samples from classes which have low $N_k^t$. 
This allows the GAN to shift its focus from a fixed set of classes to other minority classes; this shifting process continues till $N^t$ attains a uniform distribution. We provide results for DCGAN on CIFAR-10 (Imbalanced ratio = 10) with small batch sizes using the same hyperparameters in the table below; the low KL divergence of the GAN class distribution to uniform shows that the balancing effect is still preserved for lower batch sizes: \n| Batch Size \t| FID \t| KL DIV \t|\n|------------|----------------\t|--------------\t|\n| 16 | 50.05 +/- 0.15 \t| 0.03 +/- 0.0 |\n| 32 | 38.58 +/- 0.04 \t| 0.01 +/- 0.0 |\n| 256 | 30.48 +/- 0.07 \t| 0.01 +/- 0.0 |\nOur results also show the same trend of bigger batch sizes leading to smaller FID values, as seen in [3], when trained for a fixed number of iterations. We also request the reviewer to refer to our UAP results in Section 4.3, where we also train a GAN with our regularizer with a batch size of 512 to generate samples from 968 diverse classes.\n\n\n* Performance on large-scale imbalanced datasets\n * Results on other datasets: In the revised paper (Table-3), we show results on the long-tailed CIFAR-100 dataset (Imbalance Ratio = 10, SNResGAN architecture), where we are able to get a better FID and also generate a balanced distribution similar to cGAN.\n * Rationale for choosing LSUN and CIFAR-10: It has been shown in existing works [1, 2] that the current GAN architectures work well on CIFAR-10 and LSUN. Since we aim to highlight a potential issue in the existing GAN implementations, we used long-tailed versions of the same dataset to bring out the issues in the long-tailed case. \n * In our UAP experiments (Section 4.3), we generate 128 x 128 images using a DCGAN with a batch size of 512. We show that we are able to generate 968 distinct classes using the proposed approach. 
We are the first to show that a data free method is able to surpass the state-of-the-art data-driven method on the ImageNet dataset.\\n * iNaturalist contains a very large number ( > 4000) of classes, and to the best of our knowledge, there are no existing GAN papers which show Image Generation baseline results on this dataset. We could not show results on this dataset due to computational limitations. We show the generation of a large number of classes for UAP experiments (on ImageNet) and also show that our method works for long-tailed CIFAR-100 dataset. \\n\\n[1] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). Gans trained by a two time-scale update rule converge to a local nash equilibrium. In _Advances in neural information processing systems_ (pp. 6626-6637).\\n\\n[2] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved training of wasserstein gans. In _Advances in neural information processing systems_ (pp. 5767-5777).\\n\\n[3] Brock, A., Donahue, J., & Simonyan, K. (2018, September). Large Scale GAN Training for High Fidelity Natural Image Synthesis. In _International Conference on Learning Representations_.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": [\"We thank the reviewer for the valuable comments.\", \"We would like to first clarify the significance of the proposed loss function, and highlight its difference with respect to a simple weighted classification loss. The loss function used in the paper is shown below:\", \"$\\underset{\\hat{p} }{\\max} \\; \\sum_{k} \\frac{\\hat{p}_k\\log(\\hat{p}_k)}{N_k^t} $\", \"Although the regularizer term in the above equation resembles the weighted loss in structure, the term is different from the weighted cross-entropy (classification) loss. In the regularizer term, $\\hat{p}_k$ is an approximation to the fraction of class-k samples in the batch. The distribution of the batch is encouraged to generate samples from classes which have a lower $N_k^t$ (i.e. low concentration in the generated output at a particular time t), and hence achieve a balanced distribution.\", \"The weighted cross-entropy (classification) loss requires ground-truth labels for the training images. Contrary to this, in our approach we only require outputs from a pre-trained classifier. We demonstrate the effect of the regularizer through the experiments in Figure 1(b) and theoretical results in Proposition 1.\", \"Dataset too simple, need results with larger pixel numbers and more categories\", \"Results on other datasets: In the revised paper (Table-3), we show results on the long-tailed CIFAR-100 dataset (Imbalance Ratio = 10, SNResGAN architecture), where we are able to get a better FID and also generate a balanced distribution similar to cGAN.\", \"Rationale for choosing LSUN and CIFAR-10: It has been shown in existing works [3, 4] that the current GAN architectures work well on CIFAR-10 and LSUN. 
Since we aim to highlight a potential issue in the existing GAN implementations, we used long-tailed versions of the same dataset to bring out the issues in the long-tailed case.\", \"In our UAP experiments (Section 4.3), we generate 128 x 128 images using a DCGAN with batch size 512. We show that we are able to generate 968 distinct classes using the proposed approach. We are the first to show that a data-free method is able to surpass the state-of-the-art data-driven method on the ImageNet dataset.\", \"Issue with baseline, current SOTA reaches lower FID\", \"We would like to clarify that we use the DCGAN architecture for CIFAR-10 experiments. We share the mean FID score in the following table for comparison to recently published results [1, 2] with the same architecture but in a different hyperparameter setup:\", \"| Method \t| ACGAN \t| cGAN \t| SNDCGAN \t|\", \"|--------------------\t|----------\t|----------\t|----------\t|\", \"| Published Results \t| 21.44[1] \t| 19.52[1] \t| 27.50[2] \t|\", \"| Our Results \t| 24.21 \t| 18.79 \t| 27.05 \t|\", \"State-of-the-art GANs use ResNet-based large GANs, which we use for LSUN experiments (in Table 1) to show the compatibility of the regularizer with both architectures.\", \"Comparison with data augmentation and resampling\"], \"resampling\": \"1. Resampling requires labels for the training samples, whereas our method requires only a classifier. We show that our method is also effective in a semi-supervised setting with 0.1% labels in Section 4.2 of the revised paper.\n2. We tried resampling for cGAN as it was unstable (for CIFAR-10 with Imbalance Ratio = 100). It gives worse results (FID 56.89 +/- 0.04) compared to the no-resampling case (FID 48.13 +/- 0.01).\", \"data_augmentation\": \"Data augmentation forces the discriminator to learn better semantic features, which can be used with our method as well, to improve the results. 
We thank the reviewer for this suggestion, and we will investigate further on this.\\n\\n[1] ContraGAN: Contrastive Learning for Conditional Image Generation M Kang, J Park - _Advances in Neural Information Processing Systems_, 2020\\n\\n[2] Kurach K., Lu\\u010di\\u0107 M., Zhai X., Michalski, M., and Gelly S., A large-scale study on regularization and normalization in GANs. In _International Conference on Machine Learning_, 2019.\\n\\n[3] Heusel M., Ramsauer H., Unterthiner T., Nessler B., and Hochreiter S., Gans trained by a two time-scale update rule converge to a local nash equilibrium. In _Advances in Neural Information Processing Systems_, 2017\\n\\n[4] Gulrajani I., Ahmed F., Arjovsky M., Dumoulin V., and Courville A. C., Improved training of wasserstein gans. In _Advances in Neural Information Processing Systems, _2017.\"}",
"{\"title\": \"The idea is intuitive and hope to have more experiments\", \"review\": \"This paper tries to solve the data imbalance problem in conditional GANs by adding a classification loss as a constraint. This constraint can be seen as a weighted softmax where the weight is the smoothed number of classes shown before. \n\nThis paper is well written and structured. The idea is intuitive. The authors applied the methods of long-tail classification to GANs. In the experimental part, the authors first perform image generation from long-tailed distributions on CIFAR10 and LSUN, then apply this technique to data-free universal adversarial perturbation. The results are better than the ones without the proposed constraints.\n\nMy main concern is that the dataset used in the first part is too simple to reflect the strength of the algorithm. Since the method is very intuitive, more experiments on datasets with larger pixel numbers and more categories would be more convincing. \n\nAnother concern is the baseline. Since the current state-of-the-art conditional GANs can reach a lower FID (compare with the FID score in the last column of Table 1), it might be better to compare with those methods. And there exist other intuitive baselines like data augmentation and resampling. Is it fair to compare your methods with these?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Questionable significance of the presented results\", \"review\": \"Paper summary:\\n\\nThe paper proposes a regularizer to force an unconditional GAN generator to produce samples that follow a uniform class distribution. To provide feedback to the generator about the class distribution over the generated images, the proposed method utilizes a pretrained classifier on the same (imbalanced) training dataset. Motivated by the exponential forgetting of earlier tasks in neural networks [1], the regularization term encourages the generator to increase the proportion of samples of an infrequent class after a certain number of iterations and vice versa. Empirical studies are performed to show the effectiveness of the regularization: 1) the paper shows that the proposed method enables generating samples with a uniform class distribution with a GAN trained on a dataset with a long-tailed class distribution and (2) that the method benefits in generating universal adversarial perturbations (UAPs) in the data-free scenario.\", \"pros\": \"1.\\tThe paper studies an important and challenging problem of training GANs on imbalanced dataset.\\n2.\\tThe proposed regularization term is novel, the derivation of the regularization term is well explained.\", \"cons\": [\"1. Image Generation\", \"It is not clear whether it\\u2019s necessary to obtain a set of balanced samples in such a complicated way. Given a pretrained classifier as used in the proposed method, we could simply use the classifier to select samples after training a standard unconditional GAN. This simple baseline experiment is missing in the paper.\", \"The experimental setup in Section 4.1 might be unfair for the unconditional GAN baselines. If the model is trained on an imbalanced dataset and tested on a balanced dataset, then the comparison between the proposed method and unconditional GANs is not fair. 
FID for the latter might be worse simply because the training and test distributions are different, whereas the proposed method is tailored for this special purpose.\", \"The paper highlights that the method can generate images with uniform class distribution even in the unconditional case, where no labels are given to the generator. This would be useful if the classifier training is decoupled from the training data of the generator (this is not the case in the presented experiments, as the classifier uses the same training data as the generator with GT class labels). E.g. the classifier is trained on one dataset and then transferred for GAN training on another similar but unlabelled dataset, producing sharp images, or if only part of the training data was labelled, thus reducing the need for labels. Such experiments would help to support the claims in the paper. After all, the method is quite interesting, since in contrast to a conventional conditional GAN, the discriminator is not provided with the class identity of the generated image directly.\", \"The method and evaluation heavily relies on the quality of the pre-trained classifier. In the presented experimental setup, the classifier is trained on the same training set as the GAN model. So in case of the highly imbalanced training set, it's not clear how well the classifier can recognize the imbalanced classes. Thus the proposed model might suffer from the same problem as the unconditional GAN if the classifier has troubles recognizing imbalanced classes and is biased towards well represented classes.\", \"The problem posed in the paper is that unconditional GANs do not sample images from the present classes uniformly, but are biased by the class distribution in the dataset. This makes sense in the unconditional case, but Figure 2 shows that this also happens in the conditional case for ACGAN. Some further explanation for why that is would be helpful. 
Figure 2 is not discussed in the text, but it should be.\", \"The produced samples are uniformly classified into different categories by the classifier. It would be good to provide a figure with generated images and the corresponding predicted labels, to see if the prediction and actual content match as well as in the case of a conditional GAN, like cGAN.\", \"The proposed method is also trained with a big batch size of 256, which is very helpful for covering all classes. It would be more useful to see if the method also works well with small batch sizes of 16 or 32, which are common for high resolution GAN image synthesis.\", \"Both the conditional GANs and the proposed regularizer use the same labelled data.\", \"The paper says that in the highly imbalanced case cGAN suffers from training instability. In this case how is the batch formed for cGAN training? Do you balance the batch in terms of classes (as the batch size of 256 is quite large)? If not, it would be interesting to see how cGAN performs with a balanced batch.\", \"2. Universal Adversarial Perturbations\", \"The experimental results in Section 4.2 do not look convincing. The listed methods have different degrees of available information (either overlaps with the target dataset or a pretrained classifier on the target dataset). Moreover, for the proposed method, the requirement of a pretrained classifier on the target dataset is a strong limitation.\", \"The paper claims that the method can help generate UAPs in the absence of the target dataset for which a classifier shall be fooled. The target dataset is ImageNet. The dataset on which the auxiliary classifier for the proposed regularization loss is trained is also ImageNet. Hence, it seems the UAPs could have been learned from ImageNet directly.
To avoid confusion, sentences like the following need more elaboration: \\\"We also find that our data free results are at par with the recently published method (Zhang et al., 2020) which uses ImageNet training data.\\\"\", \"The following sentence needs more explanation: \\\"Our approach achieves diversity through sampling from multiple checkpoints, as in each cycle the regularizer encourages the GAN to focus on different poorly represented classes\\\". If checkpoints from different training steps are used, it seems that the target distribution is not really captured uniformly, contrary to the aim of the paper. Hence, it would be good to add some explanation here to prevent misunderstandings.\", \"Table 2 is not discussed in the text.\"], \"minor_comments\": [\"The height-width ratio of Figure 1b should be rectified.\", \"The discussion is not well written. Especially the message of the 2nd bullet point needs more clarifications.\"], \"review_summary\": \"The paper shows that the classification and image generation can be decoupled for GANs. This makes the setup semi-supervised, not unconditional (as claimed in the paper). While this is interesting, it would be good for the paper to show an application where this is actually useful, because in all provided examples the ground truth labels of the training set are present and used for training the classifier, but are just not used directly to train the discriminator. For example, when it comes to UAPs it is not clear to what extent the proposed approach is data-free if ImageNet is used for training the regularizer as well as the classifier to be fooled. Thus, the main weaknesses of the paper in my opinion are the significance of the presented results, unfairness in the experimental setup, and the clarity of presentation.\", \"post_rebuttal_feedback\": \"Thanks to the authors for the provided extra experiments and clarifications. I feel that my concerns have been partially addressed, thus raising my score to 5. 
I still think that the proposed method is limited by the classifier and its ability to capture a long-tailed distribution, which is not easy to obtain when trained on an imbalanced dataset. This significantly limits the applications of the proposed approach in real-life scenarios. The paper has also experimented only on artificially created imbalanced datasets, which contain a small number of classes, with the model being trained with a batch size higher than the number of classes. It would be beneficial to see how the model would perform in a more realistic setup where the number of classes is significantly bigger than the batch size (e.g. iNaturalist or even ImageNet), to further support the claims of the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The method part is not clearly written.\", \"review\": \"This paper focuses on the problem of GANs' poor performance on imbalanced datasets and presents the class balancing regularizer for training GANs, encouraging the GAN to pay more attention to underrepresented classes.\\nThey induce the class distribution information using a pre-trained classifier, and the regularizer utilizes the class distribution to penalize excessive generation of samples from the majority classes, thus enforcing the GAN to generate samples from minority classes.\", \"pros\": [\"The motivation is clear.\", \"It seems to cite the relevant literature (that I know of) and compare it to reasonably established attacks and defenses.\", \"Simple/directly applicable approach that seems to work experimentally, but\"], \"cons\": [\"The method part is not easy to follow. My understanding is the effective class frequency is the cumulative number of generated samples for each class (with a discount factor) and normalizing it yields the distribution of the generated samples. But I didn't get the relationship between this distribution and the following regularizer. Could you comment on that?\", \"As far as I understand, the regularizer encourages all the classes to be balanced within each batch by maximizing the entropy. I am a little bit concerned about this setting: what if we use a small batch size, so that the class distribution can be imbalanced within one batch?\", \"Only the LSUN subset and CIFAR10 are used, which include 5 and 10 classes respectively. I am wondering about the performance on large-scale imbalanced datasets like iNaturalist.\", \"----\"], \"updated\": \"Thanks to the authors for the provided extra experiments and clarifications.
Some of my concerns (e.g., how the batch size affected the performance) have been alleviated, but I do agree with other reviewers that more baselines should be included.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting Simple Idea but not Convinced about Motivation and Results\", \"review\": [\"**Overview**: The paper presents a simple regularizer term that aims to force a GAN to generate samples following a uniform distribution over different classes. The regularizer depends on a classifier that works well on an imbalanced or long-tailed dataset. The paper presents experiments on CIFAR-10 and LSUN that were synthetically long-tailed or imbalanced. The results show that the proposed term generates samples that follow a more uniform distribution over classes.\", \"*Pros*:\", \"Interesting idea as it can help a generative algorithm to remove an imbalance from a dataset.\", \"The proposed regularizer is simple but depends on a classifier (see below for more details).\", \"*Cons*:\", \"The regularization term depends on a classifier that works well already on the imbalanced dataset. Getting a classifier to work on long-tailed datasets is not an easy task and people are still investigating the development of techniques to learn from imbalanced datasets (see for example I). From a practical point of view, this is a hard requirement that can reduce the chances of adoption.\", \"Proposed loss may have conflicting terms. The final loss composed of the relativistic loss and the regularizer may be conflicting. According to the text (below Eq. 3), this loss follows the training distribution which in the context of the paper is long-tailed. However, the proposed regularizer penalizes the GAN to generate samples following a long-tailed distribution. Aren't these two terms then conflicting? If so, can this conflict impact the convergence of the network?\", \"Insufficient experiments. While the experiments show good results on two small and synthetically long-tailed datasets, it is unclear if this method can work on naturally long-tailed datasets (e.g., iNaturalist). Unfortunately, the CIFAR-10 and LSUN datasets have a small set of classes in them. 
How does this method work on naturally long-tailed (e.g., iNaturalist) and/or large-scale datasets with a larger set of classes (e.g., ImageNet-LT)? Also, what do the generated images look like? Does this method still preserve good perceptual quality?\", \"Lack of clear impact on applications. After reading the introduction, I did not have a clear application where this component can be crucial to either enable a new application or solve a bottleneck. The discussion section briefly mentions a few applications. However, I think the paper would've been stronger if it showed experiments using the proposed approach and showed its impact on a clear application.\"], \"references\": \"I. Liu et al. Large-Scale Long-Tailed Recognition in an Open World. CVPR 2019.\", \"minor_comments\": \"1. The contribution list of the Introduction section uses terms that have not been defined, i.e., FID and UAP.\\n2. If using latex, please use \\\\min, \\\\max, \\\\log to properly display the operators.\\n\\n----------------------------------------------------\\nPost Rebuttal Update\\n\\nWhile I think the idea is interesting, I still think the proposed loss is not consistent, as the two terms in the loss collide with each other, its practical value is limited mainly because making a GAN work on various datasets is a challenging task, and the experiments now raised more questions than they answered. For these reasons I still lean towards rejection as I believe the paper can benefit from a revision.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
bi7nTZy4QmH | Learning Contextual Perturbation Budgets for Training Robust Neural Networks | [
"Jing Xu",
"Zhouxing Shi",
"Huan Zhang",
"Jinfeng Yi",
"Cho-Jui Hsieh",
"Liwei Wang"
] | Existing methods for training robust neural networks generally aim to make models uniformly robust on all input dimensions. However, different input dimensions are not uniformly important to the prediction. In this paper, we propose a novel framework to train certifiably robust models and learn non-uniform perturbation budgets on different input dimensions, in contrast to using the popular $\ell_\infty$ threat model. We incorporate a perturbation budget generator into the existing certified defense framework, and perform certified training with generated perturbation budgets. In comparison to the radius of $\ell_\infty$ ball in previous works, the robustness intensity is measured by robustness volume which is the multiplication of perturbation budgets on all input dimensions. We evaluate our method on MNIST and CIFAR-10 datasets and show that we can achieve lower clean and certified errors on relatively larger robustness volumes, compared to methods using uniform perturbation budgets. Further with two synthetic datasets constructed from MNIST and CIFAR-10, we also demonstrate that the perturbation budget generator can produce semantically-meaningful budgets, which implies that the generator can capture contextual information and the sensitivity of different features in input images. | [
"adversarial robustness",
"certified robustness",
"certfied robust training"
] | Reject | https://openreview.net/pdf?id=bi7nTZy4QmH | https://openreview.net/forum?id=bi7nTZy4QmH | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"SEZJLQTlCtz",
"ytFx8Pvek3j",
"8AWxv51hPj",
"GLWqv3N2SHV",
"TxhYuQH36f_",
"Nqq5N1h3ezX",
"i-BtE7jJrdZ",
"ZVxYMMvemS4",
"8LRW2MTJjM5",
"2nPRuBVImVm",
"KMVUjUsdNpx",
"QICd9osZivN",
"SDtweLl0xEg",
"0NJiIASOXIW"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040351642,
1606029470706,
1606029245775,
1606029078930,
1606028722250,
1606028554146,
1606028012619,
1606027678091,
1606027430647,
1606027387624,
1604659351056,
1604154482503,
1603815556149,
1603394478055
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3414/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3414/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3414/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3414/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3414/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3414/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3414/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3414/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3414/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3414/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3414/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3414/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3414/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Reviewers raised various concerns about the motivation, unclear justification of the ideas and claims, insufficient comparison with related work, and weak experimental results. While the authors made efforts to address some of these issues in the rebuttal, the revision did not reach publication quality. Overall, the paper has some interesting ideas, but is not ready for publication.\"}",
"{\"title\": \"Reply to AnonReviewer2: Novelty and comparison to Liu's paper (1/3)\", \"comment\": \"**(There are currently 3 posts in total for our author response to AnonReviewer2. This is the first one, and the other 2 are presented below this one.)**\\n\\nDear AnonReviewer2,\\n\\nThank you for your efforts in reviewing our paper and providing insightful suggestions. In our response, we provide a detailed comparison to the method of Liu et al., provide additional experiments and give justifications for why the attacker has to obey our contextual perturbation budgets.\", \"here_are_our_detailed_responses_to_your_questions_and_concerns\": \"### Novelty of our method and comparisons to Liu\u2019s paper\\n\\nWe apologize for not citing this insightful paper. We have updated our paper which cites the work of Liu et al. [2]. Although both Liu's method and our method consider non-uniform perturbation bounds, we have made significantly different contributions. We summarize the differences and provide additional experiments as follows:\\n\\n1. We are a robust training method: \\n\\nIn Liu\u2019s work, they propose to compute the largest possible certified volume for a **pretrained model**. They don\u2019t update model parameters. In contrast, our method can **train the classifier jointly with the generator**, so that our model can learn semantically meaningful perturbation budgets.\\n\\nAdditionally, Liu\u2019s method cannot be extended to an efficient training method, as their method requires solving a constrained optimization problem via the Lagrangian method for each input example using a large number of iterations, which is too costly for training. We introduce a perturbation generator network with nearly no additional cost in training compared with existing uniform certified defense methods.\\n\\n2.
Different problem formulation: \\n\\nLiu\u2019s method **maximizes the volume** of the perturbation budget for a fixed network under the constraint that the network prediction is correct within this perturbation region, while our method fixes the volume of the perturbation budget and jointly trains a perturbation generator with the classifier to **maximize the robust accuracy**, which matches the training objective of prior works for certified defense.\\n\\n3. Certification approach and efficiency:\\n\\nAs for certification in the inference stage, Liu\u2019s method is still too inefficient as a large number of optimization iterations are required for every batch, while we only need a forward propagation with a small perturbation generation network, and use the efficient (CROWN-)IBP method to obtain certificates.\\n\\n4. Semantic and contextual perturbation:\\n\\nOur visualization of the generated perturbation budget and experiments on Watermarked MNIST and Doubled CIFAR-10 (see Sections 4.2 and 4.3 for details) demonstrate that our perturbation budgets indeed learn contextual knowledge. This implies that training the classifier and the generator jointly enables the budget generator to capture the contextual information of input images, rather than just optimizing training objectives, while Liu\u2019s work did not have such analysis.\\n\\n5. Additional experiments:\\n\\nIn Section 4, we conduct additional experiments to evaluate models with the method in Liu et al., as also shown in the \u201cNew experiments and comparisons\u201d section in this response. Our method is capable of training robust models on larger robustness volumes and achieving lower verified errors.\"}",
"{\"title\": \"Justification on non-uniform perturbation budgets (2/3)\", \"comment\": \"### Why the attackers have to obey non-uniform perturbation budget\\n\\nThe reviewer is concerned that a real-world attacker would not obey the perturbation budgets generated by our method. However, real world adversaries do not have to restrict themselves to the popular $\\ell_p$ threat model, either. The use of the $\\ell_p$ threat model is mostly for mathematical convenience and is sometimes problematic in real settings. In fact, the contextual perturbation budget in our method is a more realistic threat model for the following reasons:\\n\\nFirst, real world attackers indeed have to **follow the semantics** of the input in order to produce meaningful perturbations. The goal of our perturbation generator is to learn \u201cwhat is the maximal allowable perturbation **that preserves the ground-truth label given the current context**\u201d? A successful attack has to achieve the following two goals: fooling the model and maintaining the ground truth label of the input. If the attacker simply disregards the semantics of the input image, the second condition can be violated, for the perturbed image will either be too messy to look like a real image, or the ground truth label of the perturbed image will be different from that of the original image (e.g., see the \u201cinvariance adversarial examples\u201d in [1]), and this is not an adversarial attack anymore. \\n\\nIdeally, **an attacker needs to obey our learned perturbation budget, to avoid changing the ground-truth label**. Existing works use a simple $\\ell_p$ norm to determine if the ground-truth label is changed or not. However, the $\\ell_p$ threat model treats each pixel equally, while different features in the input image in fact have quite different importance depending on context, and [1] has demonstrated cases where the ground-truth label can change within an $\\ell_p$ norm perturbation budget.
Our non-uniform defense, in contrast, is semantically meaningful as more important pixels are assigned a smaller perturbation radius according to our contextual perturbation budgets. This is in fact a more realistic and useful threat model than the $\\ell_p$ norm.\\n\\nUltimately, **the goal of robust machine learning is to learn a classifier close to human perception, rather than one \u201crobust\u201d to certain $\\ell_p$ norms**. Human perception is non-uniform (humans focus on important features even though these features can be sensitive to small noise) and context dependent (what part of an image is important heavily depends on what\u2019s on the image). Admittedly, the perturbation generation process in our paper still has large room to improve to eventually match human perception. However, we believe it is important to think beyond the $\\ell_p$ robustness model for robust training, especially since in our work we consider a *context dependent* perturbation budget, which was never proposed by prior works. We hope our paper can help the community understand the limitations of the $\\ell_p$ norm and think beyond it.\\n\\nAdditionally, another important motivation for our work is that real datasets have varied sensitivity on each feature, and our method enables robust training on these datasets effortlessly. In contrast, the $\\ell_p$ threat model is problematic because it is hard to define a uniformly good perturbation budget, and such a budget can be feature dependent and contextual, so not easily defined manually. This prevents the usage of robust training in practical settings. For example, in a medical imaging setup, the difference between healthy and unhealthy tissues may be subtle and can only be recognized by experts. Using a uniform $\\ell_\\infty$ perturbation for robust training is likely to destroy the subtle but important features, reducing clean accuracy.
Empirically, we have reported initial results on the MedMNIST dataset to support this point (see details below).\"}",
"{\"title\": \"Additional experiments and about minor comments (3/3)\", \"comment\": \"### Additional Experiments\\n1. Evaluating models using the method of Liu et al., 2019\\n\\nWe have reproduced the algorithm proposed in Liu et al., 2019 with their open-source code, and the new results have been updated to **Table 1 and Table 2**. On MNIST, when models are evaluated with Liu et al.\u2019s method, the model trained with $\\epsilon=0.4$ uniform budget has a slightly lower verified error (5.88 vs. 7.97) than the model with learned budgets. However, as the target volume is increased, on $\\epsilon_0=0.6$ and $\\epsilon_0=0.8$, models trained with uniform budgets totally fail with verified error 100.0%, while our models can still achieve reasonable verified errors (13.93 on $\\epsilon_0=0.6$ and 25.77 on $\\epsilon_0=0.8$). On CIFAR-10, our models can **achieve lower verified errors than models trained with uniform budgets**, even if the uniform budget models are evaluated with Liu et al.\u2019s method. \\n\\n2. MedMNIST\\n\\nTo show the importance of using a non-uniform perturbation budget in a more realistic setting, we conduct additional experiments on the MedMNIST dataset. MedMNIST [3] is a recently developed medical imaging dataset. There are 10 sub-datasets and we adopt the OrganMNIST(axial) subset consisting of body organ CT images (please refer to Appendix B for more details). On target robustness volume $\\epsilon_0=16/255$, we train a model with learned perturbation budgets and uniform budgets respectively. For the model with learned budgets, the clean error is 21.8 and the verified error is 50.6, while for the model with uniform budgets, the clean error is 28.0 and the verified error is 57.5. Using learned contextual perturbation budgets, we are able to obtain a model with lower clean and verified errors under the same target robustness volume. We visualize learned budgets in Figure 6 in Appendix B.\\n\\n\\n### About minor comments\\n\\n1.
We thank the reviewer for pointing out the typo.\\n2. About the second \u201cminor comment\u201d, we think ours is correct. $g_\\theta(x)$ is generated by the generator network, but the generated value is relative to the $l$ and $u$ range constraints and is not an absolute perturbation budget. $g_\\theta(x)$ is turned into perturbation budgets $\\tilde{\\epsilon}(x)$ as shown in the \u201cInitial perturbation budgets\u201d paragraph, and thus it is $\\tilde{\\epsilon}(x)$ in the summation you mentioned.\\n\\n\\n### Conclusion\\nWe hope our responses can address your questions and concerns about the paper. We would also be happy to answer any other questions you may have.\", \"references\": \"[1] Tram\u00e8r F, Behrmann J, Carlini N, et al. \u201cFundamental tradeoffs between invariance and sensitivity to adversarial perturbations.\u201d ICML 2020\\n\\n[2] Liu, Chen, Ryota Tomioka, and Volkan Cevher. \u201cOn certifying non-uniform bound against adversarial attacks.\u201d ICML 2019.\"}",
"{\"title\": \"Reply to comments by AnonReviewer1\", \"comment\": \"Dear AnonReviewer1,\", \"thank_you_for_carefully_reading_our_paper_and_we_address_your_helpful_comments_below\": \"1. Network architecture of the generator:\\n\\nWe use a simple two-layer convolutional neural network in the generator. It was previously included in Sec 4.1 in the initial version of our paper, and now we have moved it with other implementation details to Appendix A. The architecture is purely convolutional and can be easily scaled to large inputs by changing the input parameters. \\n\\n2. Larger dataset:\\n\\nIn our paper, we focus on certified adversarial defense with provable robustness, but unfortunately these methods can only scale to relatively small datasets. For example, for CIFAR-10, the state-of-the-art method needs to train on 32 TPUs for a few hours [1], or on 4 GPUs for 1 day. This approach is scaled to larger datasets such as TinyImageNet in a very recent work [2]. Due to limited time in the discussion period, we were not able to conduct new experiments on TinyImageNet, but the technique developed in [2] can be applied to our method as well.\\n\\n3. Overhead of our method.\\n\\nThe size of the perturbation budget generator, which is a simple two-layer convolutional neural network in our experiments as mentioned above, is small compared with the size of the classifier which has up to 5 convolutional layers. What\u2019s more, our algorithm enables efficient joint training of the classifier and the generator, so our non-uniform perturbation budget generator will introduce very little additional cost compared with methods with uniform budgets. Our experiments also demonstrate that a simple convolutional neural network suffices to capture the contextual information in the MNIST and CIFAR-10 datasets.
\\n\\nEmpirically, we have evaluated the training time: on MNIST, it is 11.02s/epoch for the uniform one and 12.28s/epoch for the non-uniform one; on CIFAR-10, it is 14.62s/epoch for the uniform one and 16.72s/epoch for the non-uniform one. Experiments are done on an Nvidia GTX 1080Ti GPU. The measured overhead is only around 15%. For the model size, on MNIST, there are 4.9M parameters in the generator and 13.3M parameters in the classifier; on CIFAR-10, there are 8.4M parameters in the generator and 17.1M parameters in the classifier. The majority of parameters are in the last fully-connected layers. \\n\\n4. Visualization:\\n\\nTo demonstrate that our perturbation budget generator can indeed identify sensitive (important) features and choose a smaller perturbation budget for these pixels, we construct two artificial datasets, namely Watermarked MNIST and Doubled CIFAR-10, presented in Sections 4.3 and 4.4. Our results show that the learned perturbation budget is highly correlated with the important features in the input image.\", \"references\": \"[1] Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Duane Boning, and Cho-Jui Hsieh. Towards stable and efficient training of verifiably robust neural networks. In International Conference on Learning Representations, 2020.\\n\\n[2] Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, and Cho-Jui Hsieh. Provable, scalable and automatic perturbation analysis on general computational graphs, 2020.\"}",
"{\"title\": \"Reply to comments by AnonReviewer3\", \"comment\": \"Dear AnonReviewer3,\\n\\nWe thank the reviewer for the encouraging comments and valuable advice. We address your concerns as follows:\\n\\n1. Hyperparameters\\n\\nHyperparameters $\\underline{\\alpha}$ and $\\bar{\\alpha}$ are to ensure a minimum robustness on each pixel and also prevent the model from becoming over-invariant on some pixels. In our experiments, we intuitively set these hyperparameters and just used some simple values -- 2.0 for $\\bar{\\alpha}$, 0.5 and 0.125 for $\\underline{\\alpha}$ on MNIST and CIFAR-10 respectively. We set a smaller $\\underline{\\alpha}$ on CIFAR-10 because prior works usually use a much smaller perturbation radius on this dataset, e.g., 2/255 or 8/255, while we use a target robustness volume of 32/255. We expect the minimum allowed perturbation budget to be smaller than the largest radii used in prior works [1]. If these hyperparameters are changed, models tend to achieve a lower verified error when the range constraint defined by $\\underline{\\alpha}$ and $\\bar{\\alpha}$ is looser, and vice versa. In practice, such hyperparameters can be set according to the need, just as the robustness volume or perturbation radius. \\n\\n2. IBP results\\n\\nAs shown in [1], IBP performs worse than CROWN-IBP in all settings, so we propose our framework based on CROWN-IBP. We have also conducted an additional experiment on MNIST with volume $\\epsilon_0=0.8$, where the verified error by IBP is 27.09 while CROWN-IBP produces a lower (better) verified error of 26.37.\\n\\n3. Thank you for pointing out the notation problem. We have modified the notation in the current version of our paper according to your suggestion.\\n\\n4. In our paper, we focus on certified defense where the robustness can have a provable guarantee. On certified defense, CROWN-IBP [1] in ICLR 2020 is the state-of-the-art method.
In our experiments, our method has been extensively compared to the models obtained by CROWN-IBP, which originally used uniform budgets.\\n\\nWe thank you again for all the helpful comments, and please kindly let us know if you have any additional comments.\", \"references\": \"[1] Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Duane Boning, and Cho-Jui Hsieh. Towards stable and efficient training of verifiably robust neural networks. In International Conference on Learning Representations, 2020.\"}",
"{\"title\": \"Reply to AnonReviewer5: Motivations and Justifications (1/4)\", \"comment\": \"**(There are currently 4 posts in total for our author response to AnonReviewer5. This is the first one, and the other 3 are presented below this one.)**\\n\\nDear AnonReviewer5,\\n\\nThank you for thoroughly reading our paper and offering us insightful suggestions. In our response, we provide more motivations and justifications, provide comparison experiments as you requested (including comparisons to Liu et al.), and include some initial results on MedMNIST (a new medical imaging dataset) where the non-uniform perturbation is important.\", \"here_are_our_detailed_responses_to_your_questions_and_concerns\": \"### Motivations and Justifications\\n\\nThe reviewer\u2019s main concern is about the threat model. As mentioned by the reviewer, \u201can attacker would not conveniently restrict themselves to the radius learned during training\u201d. However, in fact, a real attacker would also not restrict themselves to the common $\\ell_p$ norm radius. The use of the $\\ell_p$ norm is mostly for mathematical convenience, and is largely inappropriate for many realistic scenarios.\\n\\nFirst, the perturbations from real world attackers have to **follow the semantics of the image** to generate meaningful attacks. The goal of our perturbation generator is to learn \u201cwhat is the maximal allowable perturbation **that preserves the ground-truth label given the current context**\u201d? An attacker is successful only if it can fool the network while preserving the ground truth label of the image at the same time. If the attacker completely disregards the context of the input, the ground truth label of the perturbed image can differ from that of the original image (e.g., see the \u201cinvariance adversarial examples\u201d in [1]), and this is not an adversarial attack anymore.
By comparison, our contextual perturbation budget gives a detailed characterization of the attacker\\u2019s ability by generating a perturbation radius for each pixel. Ideally, if a model is robust within our generated perturbation budget, we can expect that an attacker can hardly change the model output and preserve the ground-truth label simultaneously. This is in fact a more realistic and useful threat model than the $\\\\ell_p$ norm.\\n\\nAnother important motivation for our work is that real datasets have varied sensitivity on each feature, and our method enables robust training on these datasets effortlessly. In contrast, the $\\\\ell_p$ threat model is problematic because it is hard to define a uniformly good perturbation budget, and such a budget can be feature dependent and contextual, so it is not easily defined manually. This prevents the usage of robust training in practical settings. For example, in a medical imaging setup, the difference between healthy and unhealthy tissues may be subtle and can only be recognized by experts. Using a uniform $\\\\ell_\\\\infty$ perturbation for robust training is likely to destroy these subtle but important features, harming the clean accuracy of the model. Empirically, we have reported initial results on the MedMNIST dataset to support this point (see details below).\\n\\nUltimately, **the goal of robust machine learning is to learn a classifier close to human perception, rather than one \\u201crobust\\u201d to certain $\\\\ell_p$ norms**. Human perception is non-uniform (humans focus on important features even though these features can be sensitive to small noise) and context dependent (what part of the image is important heavily depends on what\\u2019s on the image). Admittedly, the perturbation generation process in our paper still has large room to improve to eventually match human perception. 
However, we believe it is important to think beyond the $\\\\ell_p$ robustness model for robust training, especially since our work considers a *context dependent* perturbation budget, which was never proposed by prior works. We hope our paper can help the community understand the limitations of the $\\\\ell_p$ norm and think beyond it.\"}",
"{\"title\": \"Comparisons to the work of Liu et al. (2/4)\", \"comment\": \"### Comparisons to the work of Liu et al.\\n\\nWe apologize for not citing this insightful paper. We have updated our paper to include a discussion of Liu et al. [2]. We have made significantly different contributions compared to Liu et al.; as the reviewer mentioned, \\u201cthey optimize the budget to maximize the volume rather than use a generator to produce perturbation budgets, and do not train\\u201d. We also provide additional experiments to demonstrate that our method is capable of training models with larger robustness volumes and outperforming models trained with uniform budgets, even if the models trained with uniform budgets are evaluated with Liu\\u2019s method. We summarize our main differences below:\\n\\n1. We are a robust training method: \\n\\nIn Liu\\u2019s work, they propose to compute the largest possible certified volume for a **pretrained model**. They don\\u2019t update model parameters. In contrast, our method can **train the classifier jointly with the generator**, so that our model can learn semantically meaningful perturbation budgets.\\n\\nAdditionally, Liu\\u2019s method cannot be extended to an efficient training method, as their method requires solving a constrained optimization problem via the Lagrangian method for each input example using a large number of iterations, which is too costly for training. We introduce a perturbation generator network with nearly no additional cost in training compared with existing uniform certified defense methods.\\n\\n2. 
Different problem formulation: \\n\\nLiu\\u2019s method **maximizes the volume** of the perturbation budget for a fixed network under the constraint that the network prediction is correct within this perturbation region, while our method fixes the volume of the perturbation budget and jointly trains a perturbation generator with the classifier to **maximize the robust accuracy**, which matches the training objective of prior works for certified defense.\\n\\n3. Certification approach and efficiency:\\n\\nAs for certification in the inference stage, Liu\\u2019s method is still too inefficient as a large number of optimization iterations are required for every batch, while we only need a forward propagation with a small perturbation generation network, and use the efficient (CROWN-)IBP method to obtain certificates.\\n\\n4. Semantic and contextual perturbation:\\n\\nOur visualization of the generated perturbation budgets and our experiments on Watermarked MNIST and Doubled CIFAR-10 (see Sections 4.2 and 4.3 for details) demonstrate that our perturbation budgets indeed learn contextual knowledge. This implies that training the classifier and the generator jointly enables the budget generator to capture the contextual information of input images, rather than just optimizing training objectives, while Liu\\u2019s work did not include such an analysis.\\n\\n5. Additional experiments:\\n\\nIn Section 4, we conduct additional experiments to evaluate models with the method in Liu et al., as also shown in the \\u201cNew experiments and comparisons\\u201d section in this response. Our method is capable of training robust models on larger robustness volumes and achieving lower verified errors.\"}",
"{\"title\": \"New experiments and comparisons (3/4)\", \"comment\": \"### New experiments and comparisons\\n\\nWe provide additional experiments to show why the contextual perturbation budget is necessary and demonstrate that our method has superior performance over existing methods.\\n\\n1. Evaluating models using the method of Liu et al. (2019)\\n\\nWe have reproduced the algorithm proposed in Liu et al., 2019 with their open-source code, and the new results have been updated to **Table 1 and Table 2**. On MNIST, when models are evaluated by Liu et al., the model trained with a uniform $\\\\epsilon=0.4$ budget has a slightly lower verified error (5.88 vs. 7.97) than the model with learned budgets. However, as the target volume is increased, on $\\\\epsilon_0=0.6$ and $\\\\epsilon_0=0.8$, models trained with uniform budgets totally fail with verified error 100.0%, while our models can still achieve reasonable verified errors (13.93 on $\\\\epsilon_0=0.6$ and 25.77 on $\\\\epsilon_0=0.8$). On CIFAR-10, our models can **achieve lower verified errors than models trained with uniform budgets**, even if the uniform budget models are evaluated with Liu et al. \\n\\n2. Evaluating models on uniform budgets\\n\\nAs requested by the reviewer, we have updated our Table 1 and Table 2. Now for models trained with both learned budgets and uniform budgets, we evaluate them under three methods: fixed uniform budgets, learned budgets, and non-uniform certification by Liu et al.\\n\\nIt is true that models with learned budgets tend to have higher verified errors when they are evaluated with uniform budgets. However, achieving good robustness on uniform budgets is not the goal of the paper, nor is it the ultimate goal of robust machine learning. As we have argued in the \\u201cMotivations and Justifications\\u201d section above, we believe that using non-uniform and contextual budgets is a better setting than uniform budgets. 
For example in the MNIST $\\\\epsilon=0.4$ setting, [1] showed that we can actually find many images within this $\\\\ell_\\\\infty$ perturbation budget that *changed the ground-truth label*.\\n\\nNevertheless, if we want to maintain a satisfactory verified accuracy on a smaller uniform budget and also pursue a robustness under learned non-uniform budgets, we may set $\\\\underline{\\\\alpha}$ which controls the minimum allowed budget for each pixel. To show an example, we train a model on MNIST with $\\\\epsilon_0=0.5$ and $\\\\underline{\\\\alpha}=0.6$, which means the minimum budget of each pixel is 0.3. As a result, verified error on non-uniform budgets is 10.46, while the verified error on uniform $\\\\epsilon=0.3$ budget is 9.96, which is close to the 9.40 verified error reported in CROWN-IBP[4] on the same \\u201cDM-small\\u201d model trained with uniform budgets. And if $\\\\epsilon_0=0.6, \\\\underline{\\\\alpha}=0.5$, the verified error on uniform $\\\\epsilon=0.3$ is 11.22 which is also not far away from 9.40, while now our model has a robustness on a volume as large as $0.6^n (n=28*28)$.\\n\\n3. MedMNIST\\n\\nMedMNIST [3] is a recently developed medical imaging dataset. There are 10 sub-datasets and we adopt the OrganMNIST(axial) subset consisting of body organ CT images (please refer to Appendix B for more details). On target robustness volume $\\\\epsilon_0=16/255$, we train a model with learned perturbation budgets and uniform budgets respectively. For the model with learned budgets, the clean error is 21.8 and the verified error is 50.6, while for the model with uniform budgets, the clean error is 28.0 and the verified error is 57.5. Using learned contextual perturbation budgets, we are able to obtain a model with lower clean and verified errors under the same target robustness volume. We also visualize learned budgets in Figure 6 in Appendix B.\"}",
"{\"title\": \"Differentiability for PGD training (4/4)\", \"comment\": \"### Differentiability for PGD training\\n\\nThe loss is technically differentiable with respect to the perturbation budgets; however, it is much more complicated. For PGD training, for example, if we run $N$ steps, at each step we add noise of roughly $\\\\frac{\\\\epsilon}{N}$, so $\\\\epsilon$ is used by every step of PGD. Eventually, to get the correct gradient w.r.t. $\\\\epsilon$, we will need to backprop through the $N$ PGD update steps since $\\\\epsilon$ affects every step of PGD. In contrast, in certified defense we use Eq. (7), where the loss is a direct function of $\\\\epsilon$. In fact, we did some initial experiments with adversarial training and found that it is possible but trickier to train the perturbation generator. Nevertheless, the main goal of our paper is to present the idea of learning a non-uniform and context dependent perturbation budget, and we use certified defense to demonstrate it just for technical convenience.\\n\\n### Conclusion\\n\\nWe hope the reviewer can now better understand the motivation behind our work and that your concerns are addressed. We would love to answer any additional questions you may have.\", \"references\": \"[1] Tram\\u00e8r F, Behrmann J, Carlini N, et al. \\u201cFundamental tradeoffs between invariance and sensitivity to adversarial perturbations\\u201d ICML 2020\\n\\n[2] Liu, Chen, Ryota Tomioka, and Volkan Cevher. \\u201cOn certifying non-uniform bound against adversarial attacks.\\u201d ICML 2019.\\n\\n[3] Yang, Jiancheng, Shi, Rui, and Ni, Bingbing. Medmnist classification decathlon: A lightweight automl benchmark for medical image analysis. arXiv preprint arXiv:2010.14925, 2020.\\n\\n[4] Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Duane Boning, and Cho-Jui Hsieh. Towards stable and efficient training of verifiably robust neural networks. In International Conference on Learning Representations, 2020.\"}",
"{\"title\": \"Uses a deep network to learn non-uniform radii for certified defenses, comparisons are not great\", \"review\": \"The paper proposes using a budget generator to generate non-uniform radii for certification. The budget generator is trained jointly with the certified defense to change the shape of the perturbation set while maintaining the volume to improve certified error.\", \"on_motivation\": \"the paper could use some more justification or motivation for why we would want to change our perturbation radius during training to maximize certified performance. Typically this is the other way around: we have a set we want to defend against, and so the certified defense optimizes for this specified set since we're trying to defend against a particular attack. The setup here is strange in this regard, because rather than adapting a defense to a threat model, the threat model is being adapted to the defense, where the defense is defined by a fixed volume but is otherwise whatever the defender trains it to be. Since an attacker would not conveniently restrict themselves to the radius learned during training, this doesn't really make much sense from the point of view of certifying robustness to adversarial examples (since it doesn't defend against *all* perturbation sets with the specified volume, it only defends against one which isn't specified a priori).\", \"on_comparisons\": \"The authors compare their certified defense with non-uniform budgets to certified defenses with uniform budgets. In its current form, this is completely incomparable: the uniform budgets are trained and certified with uniform radius, while the learned budgets are trained and certified with learned budgets. Since the learned budgets are almost certainly different from the uniform budgets, these are completely different threat models. 
It would be much more informative to the reader to report results on *both* types of budgets for *both* models instead, rather than only showing half of the story. Specifically, this means\\n(a) evaluating all the models trained with learned budgets using the uniform budgets that are more typical in the literature and reflect what they were actually trained for (e.g. <= 0.4 for MNIST, <= 8/255 for CIFAR10)\\n(b) evaluating the models trained with uniform budgets using the learned budgets to compare against the models trained with learned budgets\\nIt's quite possible that the learned budgets, while capable of certifying more volume due to the changed threat model, come at the cost of worse certified performance for a uniform bound of a *smaller* volume (e.g. at uniform radius 0.1 or 0.3 for MNIST, as is commonly reported in the literature). This would also help provide a more realistic and fair comparison to certified defenses with uniform budgets: the current tables report certified accuracy at extremely large radii well beyond what they were trained for, and so this winds up being a rather misleading comparison that is not very useful.\", \"on_related_work_and_comparisons_thereof\": \"The authors seem to be unaware of the ICML 2019 publication \\\"On Certifying Non-Uniform Bounds against Adversarial Attacks\\\" by Liu et al., which has studied the problem of certifying non-uniform bounds which maximize the volume (exactly the same type of bound studied in this work). There is still a difference, in that they optimize the budget to maximize the volume rather than use a generator to produce perturbation budgets, and do not train. Nonetheless, this is arguably the most relevant work and has been out for quite a while, and so it would be fair to expect some sort of comparison here. For example, a reasonable experiment could be to simply calculate the non-uniform bounds from Liu et al. 
on the model trained with a non-uniform budget vs the uniform budget.\", \"minor_comment\": \"The authors mention that the joint training of the classifier and the perturbation budget generator is somehow more difficult for PGD adversarial training \\\"as it is not fully differentiable w.r.t. perturbation budgets\\\". I don't quite get what the authors are trying to say here. My understanding is that the authors perform joint training by backpropagating the robust loss through both the classifier and the perturbation budget generator, since there is no auxiliary loss for the perturbation budget generator. Shouldn't this imply that the standard loss is in fact differentiable with respect to the perturbation budgets, and so PGD is just as applicable as before?\", \"update\": \"I thank the authors for their response. I've read the other reviews as well, and indeed R2 had similar concerns to my own. I'm glad to see the more comprehensive comparison to Liu et al., which paints a fuller picture of the effects and trade-offs of the approach. \\n\\nThe argument behind the motivation, however, feels much like setting up a straw man for Lp robustness. For example, the authors argue that their approach is label and semantics preserving unlike uniform perturbations; however, this is quite frankly only the case for extremely large perturbations in MNIST-like settings which are unrealistic by design (most papers do not consider such large radii for exactly this reason). Uniform perturbations seen commonly in CIFAR10/Imagenet settings are practically invisible and consequently are equally semantics preserving and close to human perception. If the authors do wish to pursue this argument that these are truly more semantics preserving, then this needs to be backed by evidence. The authors weakly suggest this is the case because the budgets look similar to the content in the images. However, this does not imply that an adversarial attack within this budget is label preserving (i.e. 
many of the presented examples have large budgets in the background directly adjacent to the label-content of the image, which can easily change how the content looks), and so this needs to be justified carefully if this claim is to be made. \\n\\nThe authors also incorrectly equate the restrictions imposed on an attacker from learned perturbation radii to that of a uniform radius. These are *not* equivalent, especially in the security setting where these are night and day; the first amounts to the defender choosing the rules of the game that work optimally for them, whereas the latter is a *defense agnostic* rule that both the defender and attacker must obey. This is a significantly easier setting for the defender that needs to be properly motivated, as restricting an adversary to a fixed perturbation set is inherently different from restricting the adversary to a fixed perturbation set that the defender gets to choose. The reason why one would want to maximize certification volume needs to be properly motivated, as it is no longer applicable to the usual adversarial security setting and comes at a cost to the usual robustness considerations. \\n\\nTo recognize the addition of the necessary comparison to past work, I have improved my score slightly. However, I would still argue that this is below the threshold, as their central claim of learning *semantic preserving* perturbation budgets is not justified despite being a central component of the paper, as well as the motivation for why it's considered beneficial to choose the most easily certified volumes for robustness in the first place (and certainly not helpful from a security perspective).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Contributions are well presented and demonstrated\", \"review\": \"This paper addresses the problem of training robust neural networks with non-uniform perturbation budgets on different input pixels. In practice, a perturbation budget generator is introduced to generate the context-aware perturbation budget (i.e. conditioned on the input) for each pixel of the input image. A \\u201crobustness volume\\u201d constraint on generated perturbation budgets to control the robustness intensity is also proposed. Extensive experiments on MNIST and CIFAR10 demonstrate that the proposed method outperforms the SOTA under various uniform perturbation budgets.\\n\\n\\nFrom my perspective, the writing of this paper is good, and the contributions are well presented and demonstrated by extensive experiments. So I vote for accepting.\", \"comments\": [\"How to determine the hyper-parameters \\\\bar{\\\\alpha} and \\\\underline{\\\\alpha} for each benchmark is still unknown. Are the final results sensitive to these hyper-parameters? Does it take a high cost to adjust these hyper-parameters for different benchmarks?\", \"How about the performance by using IBP?\", \"In Eqn. 2, the index i indicates the i-th pixel. But in Eqn. 3, it denotes the i-th category label. Please modify this to avoid misunderstanding.\", \"Since I'm not very well versed with the current baseline and state-of-the-art for variable robust training of DNN, it would be good to compare with other SOTA methods.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review for \\\"Learning Contextual Perturbation Budgets for Training Robust Neural Networks\\\"\", \"review\": \"This paper proposes to change the perturbation budget for adversarial attacks to a non-uniform setting where different input pixels have different perturbation budgets. To achieve this, an additional network is trained to learn the perturbation budget for each part of the input. The approach seems to perform better than a uniform perturbation budget and also learns semantically meaningful budgets for the input.\\n\\nI am not an expert on this topic and will, therefore, keep my review quite short.\\n\\nThe idea that not all parts of the input should be treated equally makes sense and is well motivated.\\n\\nQuestions/remarks:\\n- What is the exact architecture of the network that learns the perturbation budget? Is it purely convolutional? Will it easily scale to larger inputs?\\n- It would be interesting to see the performance of your model on more complicated datasets, e.g. Tiny-ImageNet\\n- How much overhead (training time, model size, etc.) does the training of the additional network for the perturbation budget introduce?\\n- For the visualization of the learned perturbation budget: it would be interesting to also run some analysis/visualization of what parts of the input have the strongest effect on the final prediction to see if this correlates with your learned perturbation budget (i.e. if the parts of the input that have the strongest effect on the final classification also have the smallest perturbation budget)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Review for \\\"Learning Contextual Perturbation Budgets for Training Robust Neural Networks\\\"\", \"review\": \"Update after the rebuttal:\\n\\nI read the response from the authors and other reviews. I have increased my score to 5 given that the authors now performed a\\ncomparison with Liu et al. However, I still believe that the threat model is not realistic and that the attacker cannot be bounded by the \\nbudget produced by the generator network here. While the authors wrote a quite detailed response, I did not find it convincing enough. As R5 points out similar issues, I would encourage the authors to think more on how to tackle the problem better. Perhaps the threat model can limit the attacker to all possible perturbations under a certain volume budget, but that would be quite different from the idea in this paper. Thus, I cannot recommend acceptance at the current state.\\n\\n============================================================================\\n\\n-> Summary:\\n\\nThe authors propose to train provably robust models with respect to a threat model\\nthat assumes a non-uniform perturbation budget for the attacker. Under this threat model,\\nthe attacker is allowed to perturb some pixels more than others, unlike the standard\\nl_p threat model, where each pixel has a maximum perturbation value. In this work, the authors\\npropose a perturbation generator which produces bounds for each pixel and a certified defense\\nto train a provably robust model with respect to the generated perturbation bounds. They show\\nthat trained models have lower natural and verified error than models trained with uniform\\nbounds of equivalent volume.\\n\\n-> Reasons for score:\\n\\nI vote for rejecting this paper. The biggest issue I have is that the authors do not compare with (or cite) \\nthe work of Liu et al. [1], whose contributions I believe substantially overlap with contributions of\\nthis submission. 
The other concern I have is that the idea of learning a perturbation budget which\\nthe attacker should obey seems somewhat unrealistic.\\n\\n-> Pros:\\n\\n- I like the general idea of a threat model where the maximum perturbation is different for each pixel.\\n- The paper is well written, easy to follow and manages to bring across the main points.\\n- The authors perform experiments on several datasets and present visualizations which help to understand\\nwhat the method has actually produced.\\n\\n-> Cons:\\n\\n- The biggest issue I have with this paper is the omission of the prior work of Liu et al. [1], which \\nwas the first paper to consider non-uniform perturbation bounds. Liu et al. consider the same\\nvolume constraint as in Equation (4) in this submission and propose a Lagrangian method to maximize\\nthe log of the volume. I think it is essential that the authors compare their perturbation generator in section 3.2\\nwith the Lagrangian method from Liu et al. and check which of the two methods can generate a larger volume. \\nOne contribution that this submission has and Liu et al. does not is that of certified defense, as Liu et al.\\nfocus only on certification of already trained networks (I am not sure if their method can be trivially extended \\nfor training). However, given that the authors simply use the auto_lirpa library for certified defense, contributions there\\nare also limited.\\n- I am not sure that the idea of generating a perturbation budget is feasible in practice. What happens here\\nis that we generate the threat model under which the attacker operates ourselves and then we somehow expect \\nthat the attacker should obey this threat model. For example, let's say that the proposed procedure results in \\neps(x) = [0.3, 0.1, 0.1]. 
Why would the attacker have to obey the fact that the third pixel is perturbed by at most 0.1?\\nEssentially, the method here learns eps(x) such that the attacker can't find an adversarial example, but I don't understand\\nin what scenario the attacker would be limited by the *particular* eps(x) that this method has produced.\\nI think it would be more sensible to have a defense which guarantees that the model is robust under all threat models\\nwhich have volume below some constant V_0, but on the other hand this would be strictly more difficult than the \\nuniform baseline.\\n- Another thing I think is missing is some baseline for computing the perturbation budget introduced in 3.3.\\nThis relates to the work of Liu et al. who proposed a Lagrangian method to do this. Given that the algorithm in 3.3 \\nis perhaps the biggest contribution of this work, I would have expected there to be at least some baseline to compare with.\\n\\n[1] Liu, Chen, Ryota Tomioka, and Volkan Cevher. \\\"On certifying non-uniform bound against adversarial attacks.\\\" ICML 2019.\\n\\n\\n-> Questions:\\n\\n- Can the authors comment on the work of Liu et al. [1] and what they think are the main contributions of their work\\nwhich are not already present in Liu et al.? Ideally, the authors should also compare to their method (at least\\nthe certification part as they have no training).\\n- Can you explain what would be a realistic scenario where the attacker has to obey the perturbation budget generated by this method?\\n\\n-> Minor comments:\\n\\n- typo: \\\"generailized\\\"\\n- In \\\"Refining the perturbation budgets\\\", third line, should it be [g_\\\\theta(x)]_i under the summand?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
pwwVuSICBgt | Enabling Binary Neural Network Training on the Edge | [
"Erwei Wang",
"James J. Davis",
"Daniele Moro",
"Piotr Zielinski",
"Claudionor Coelho",
"Satrajit Chatterjee",
"Peter Y. K. Cheung",
"George Anthony Constantinides"
] | The ever-growing computational demands of increasingly complex machine learning models frequently necessitate the use of powerful cloud-based infrastructure for their training. Binary neural networks are known to be promising candidates for on-device inference due to their extreme compute and memory savings over higher-precision alternatives. In this paper, we demonstrate that they are also strongly robust to gradient quantization, thereby making the training of modern models on the edge a practical reality. We introduce a low-cost binary neural network training strategy exhibiting sizable memory footprint reductions and energy savings vs Courbariaux & Bengio's standard approach. Against the latter, we see coincident memory requirement and energy consumption drops of 2--6$\times$, while reaching similar test accuracy, across a range of small-scale models trained to classify popular datasets. We also showcase ImageNet training of ResNetE-18, achieving a 3.12$\times$ memory reduction over the aforementioned standard. Such savings will allow for unnecessary cloud offloading to be avoided, reducing latency and increasing energy efficiency while also safeguarding user privacy. | [
"Binary neural network",
"edge computing",
"neural network training"
] | Reject | https://openreview.net/pdf?id=pwwVuSICBgt | https://openreview.net/forum?id=pwwVuSICBgt | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"BMXzRo7wLHW",
"CgjVTGj5292",
"kCBAi_0l7TX",
"B4kvJu-01oE",
"AMfZReX8924",
"hzax3orV6h",
"5Ucajy1JbJO",
"SB03xhDUg_a",
"nwvrc0YLcUI",
"XWf3ptGW1oC",
"-n9jdfNsfqF"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610992366394,
1610040492479,
1605633016816,
1605632891804,
1605632648488,
1605632425839,
1605631273853,
1603923928068,
1603896115293,
1603867399189,
1603733614553
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Paper3413/Authors"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3413/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3413/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3413/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3413/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3413/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3413/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3413/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3413/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3413/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Response to the Program Chairs\", \"comment\": \"We thank the reviewers and program chairs for their feedback on our work. We would like to highlight that our objective was to save *memory* during network training -- the most common resource limitation on edge devices -- and that we did not set out to achieve or make any claims regarding *speedup*. Algorithms 1 and 2, Table 2 and our supplementary materials capture our calculations of memory usage. We believe these results are sufficient to demonstrate the memory reduction claimed, and that therefore building a prototype would not add further scientific value.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"After reading the paper, reviews, and authors\\u2019 feedback, the meta-reviewer agrees that this paper addresses an important topic. However, as the reviewers pointed out, the paper mainly builds the technique in a simulated setting, and it is unclear how the method will translate to real-world speedups. Past work (e.g. [1]) has also shown that in many cases there can be a huge gap when the solution is not built carefully.\\n\\nThe paper would benefit from a prototype to demonstrate the applicability of the approach. This paper is therefore rejected.\\n\\nThank you for submitting the paper to ICLR. \\n\\n[1] Riptide: Fast End-to-End Binarized Neural Networks\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \">Although there are certain aspects that could be improved, such as including a table outlining in a clearer manner the contributions of the authors in this context.\\n\\nThank you for this suggestion. We now compare more directly against prior low-cost training works targeting non-binary networks in the newly added Table 1.\\n\\n>However, I would like to know if the authors have made an ablation study to assess whether or not the use of batch normalization would have an effect on the accuracy of the proposed models. Do you provide experimental evidence of the lack of degradation due to the use of l1 batch normalization?\\n\\nAuthors including Sari et al. (2019) have reported that, without batch normalization, BNNs suffer from gradient explosion. We confirmed this observation in our early-stage experiments with BinaryNet, as we now state explicitly in Section 3. While we included results showing accuracy change as a result of our $l_1$-based batch normalization in Tables 4 and 5 (now 5 and 6), we now further include results for $l_1$ batch normalization in the style proposed by Wu et al. (2018b), i.e. Section 5.2 step 1 only, in Tables 4 and 5 (now 5 and 6).\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \">The major concern is that the proposed low-cost BNN training scheme can cause a nontrivial accuracy degradation (2.25\\\\%) as shown in Table 5.\\n\\nAs also shown in Table 5 (now 6), this degradation comes in return for memory and energy consumption reductions of 3.12$\\\\times$ and 1.17$\\\\times$, respectively. Whether or not such an accuracy degradation is acceptable will be application-dependent, as we now make explicit in Section 6. In Tables 3 and 5 (now 4 and 6), we found broadly similar degradation across models spanning a range of sizes, thereby demonstrating the generality of our approach.\\n\\n>The tradeoff between accuracy and memory footprint/energy consumption is not carefully evaluated. For example, a smaller network model with fewer parameters and activations can be trained using the baseline BNN training scheme to reduce memory and energy consumption.\\n\\nThese tradeoffs can be found in Tables 4 and 5 (now 5 and 6). While training cost reductions are possible through the selection of different network models, this observation is largely orthogonal to our work: by applying our approach to the training of a smaller model, one can obtain the advantages of both optimized network selection and training, effectively benefiting twice. In Tables 3 and 5 (now 4 and 6), we showed significant savings across models spanning a range of sizes, thereby demonstrating the generality of our approach. We have rephrased and expanded the relevant discussion in Section 6 to more clearly emphasize these points.\\n\\n>Besides, the baseline networks (BinaryNet and ResNet) used for comparison are out-of-date. Many recent works [1-3] propose new BNN architectures, which improve the accuracy of BNNs significantly. 
It is useful to justify the effectiveness of the proposed scheme for these SOTA BNN architectures.\\n\\nSince the works the reviewer highlights share architectural features with networks for which we obtained positive results -- particularly ResNetE-18's skip connections -- we see no fundamental reason why our approach would not be favorable with these models as well. Moreover, ResNetE-18 was published in 2019, and is therefore more current than Bi-Real Net (2018). We now cite the works highlighted by the reviewer in Section 2, and have added text to Section 6 to explain that ResNetE-18 is representative of a broader class of modern network models.\\n\\n>To evaluate the energy consumption of the traditional and the proposed BNN training scheme, the authors assume that the two training schemes have the same convergence rate. Although the traditional BNN training scheme consumes higher power during training, it might take fewer epochs to reach the same accuracy. Therefore, the saving in energy consumption can be lower than the reported numbers.\\n\\nWe included a subset of our experiments' training curves in Appendix A.2 to show that our method does not result in convergence rate deterioration vs the baseline. To further support this claim, we now include the training curves for all of our experiments in Appendix A.2, and have highlighted the lack of induced convergence rate deterioration in the abstract and Section 1.\\n\\n>In addition, the estimated energy consumption of BNN training obtained from using QKeras is very rough, which makes the improvement in energy consumption less convincing. The authors can mainly improve the paper's strength by prototyping the proposed BNN training scheme on an embedded CPU and measuring real-world performance and power.\\n\\nWe chose to use QEnergy since it provides platform-agnostic energy estimates. 
While we acknowledge that a platform-specific implementation would have allowed us to obtain more accurate energy consumption figures than reported in the paper, we believe that our QEnergy-derived estimates are more useful from a high-level perspective in quantifying relative energy changes in a platform-agnostic manner. We have rephrased our description of the energy estimator in Section 6 to emphasize this point.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \">The proposed method seems like a combination of existing technical methods.\\n\\nWe would like to emphasize that this work does not represent simply a combination of approximation techniques but, as detailed in Sections 1, 4 and 5, a number of novel insights into BNNs' robustness to particular quantization schemes resulting in careful selection of these methods. In order to more clearly highlight our work's novelty, we now compare our use of approximation vs previous non-binary neural network training works in the newly added Table 1.\\n\\n>There is a lack of comparison with other low-cost binary neural network training works.\\n\\nWhile there are no prior works specifically addressing the training costs of BNNs, we compared against works targeting training cost reductions of non-binary networks in Tables 4 and 5 (now 5 and 6). In Table 5 (now 6), for example, \\\"bool $\\\\partial\\\\boldsymbol{W}$ only\\\" is equivalent to SignSGD (Bernstein et al., 2018). In order to further highlight the benefits of our BNN-specific approach, we have added results for $l_1$ batch normalization in the style proposed by Wu et al. (2018b), i.e. Section 5.2 step 1 only, to Tables 4 and 5 (now 5 and 6). Furthermore, we now compare our use of approximation vs previous non-binary neural network training works in the newly added Table 1.\\n\\n>(1) There is a difference in the memory consumption in Table1 and in section 5.1 (1.67 MiB or 1.41 MiB ?). The authors may check this.\\n\\nThank you for pointing out this discrepancy. We have amended the value in Section 5.1 to the correct 1.67 MiB.\\n\\n>(2) As for the perspective of overall design, it\\u2019s better to emphasize the trade-offs between the importance of different variables to the overall training and the choose of the data type.\\n\\nThese tradeoffs can be found in Tables 4 and 5 (now 5 and 6). 
We have rephrased the text related to these tables in Section 6 to better emphasize the tradeoffs.\\n\\n>(3) It\\u2019s better to add some comparisons with other low-cost binary neural network training works.\\n\\nPlease find our response to this comment above.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \">Need more experiments on the actual training time between two BNNs which are trained with quantized and full-precision gradient weight.\\n\\nWe did not include training time results since the purpose of this work is to reduce the resource costs of BNN training, not to accelerate it. We did, however, include a subset of our experiments' training curves in Appendix A.2 to show that our method does not result in convergence rate deterioration vs the baseline. To further support this claim, we now include the training curves for all of our experiments in Appendix A.2, and have highlighted the lack of induced convergence rate deterioration in the abstract and Section 1.\\n\\n>Comparison with similar works in recent BNNs.\\n\\nWhile there are no prior works specifically addressing the training costs of BNNs, we compared against works targeting training cost reductions of non-binary networks in Tables 4 and 5 (now 5 and 6). In Table 5 (now 6), for example, \\\"bool $\\\\partial\\\\boldsymbol{W}$ only\\\" is equivalent to SignSGD. In order to further highlight the benefits of our BNN-specific approach, we have added results for $l_1$ batch normalization in the style proposed by Wu et al. (2018b), i.e. Section 5.2 step 1 only, to Tables 4 and 5 (now 5 and 6). Furthermore, we now compare our use of approximation vs previous non-binary neural network training works in the newly added Table 1.\\n\\n> What does it mean B variables in Algorithm 2 of lines: 6, 9, and 10, and how does it influence the performance of training?\\n\\n$B$ is the batch size, which we have clarified in Section 3. We exemplified the impact on accuracy, memory and energy of varying $B$ in Figure 2.\\n\\n>In figure 2, the legends should be put in the figure (not in the caption). 
It is better to follow.\\n\\nThank you for raising this point, which we have now addressed for all of our plots.\\n\\n>It would be nice to have the training time and accuracy in the large-scale dataset of ImageNet.\\n\\nImageNet accuracy results can be found in Table 5 (now 6). While we did not include ImageNet training time results since the purpose of this work is to reduce the resource costs of training, not to accelerate it, we did include a subset of our experiments' training curves in Appendix A.2 to show that our method does not result in convergence rate deterioration vs the baseline. To further support this claim, we now include the training curves for all of our experiments in Appendix A.2, and have highlighted the lack of induced convergence rate deterioration in the abstract and Section 1.\"}",
"{\"title\": \"General Responses\", \"comment\": \"We thank the reviewers for their positive assessment of our work and helpful suggestions for improvement.\\nPlease find our responses to specific questions directly after each review.\\n\\nReviewers questioned the lack of comparison with other low-cost BNN training works. We would like to kindly point out that there are no prior works specifically addressing the training costs of BNNs. As a result, we compared against prior works targeting training cost reduction for non-binary networks in Tables 4 and 5 (now 5 and 6). We now include additional results within the same tables to further highlight the benefits of our BNN-specific approach.\\n\\nWe have also added training curves to Appendix A.2, now covering all experiments reported in the paper, to more robustly highlight that our proposals do not result in convergence rate deterioration vs Courbariaux \\\\& Bengio's standard approach.\"}",
"{\"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes a novel method to make the training of Binary Neural Networks low-memory and low-energy by modifying the backpropagation and forward processes. To this end, the paper binarizes weight gradients, changes the batch normalization layer to remove full-precision inputs, and utilizes quantization to accelerate the whole BNN training procedure.\", \"The paper targets one of the key problems in Binary Neural Networks and provides experiments as well as source code as proof of efficiency. This is the most significant contribution of the paper.\", \"However, there are the following concerns:\", \"Need more experiments on the actual training time between two BNNs which are trained with quantized and full-precision gradient weight.\", \"Comparison with similar works in recent BNNs.\", \"What does it mean B variables in Algorithm 2 of lines: 6, 9, and 10, and how does it influence the performance of training?\", \"In figure 2, the legends should be put in the figure (not in the caption). It is better to follow.\", \"It would be nice to have the training time and accuracy in the large-scale dataset of ImageNet.\", \"In conclusion, the paper presents a novel idea for improving the training of Binary Neural Networks in terms of memory and energy. However, the aforementioned concerns remain.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"review\", \"review\": \"#### Comments\", \"summary\": \"The authors proposed a low-cost binary neural network training strategy exhibiting sizable memory footprint reductions and energy savings. The methods include binarizing weight gradients, modifying the forward and backward batch normalization operations and using power-of-two activation gradients and reduced-precision floating-point data. The experimental results show some improvement in memory footprint reduction and energy savings vs the standard approach.\", \"strength\": \"-- The authors carried out a relatively thorough experimental analysis, evaluating across multiple models, data sets, optimizers and batch sizes\\n-- Estimating storage and energy consumption based on hardware models increases the completeness and credibility of the conclusions.\\n-- The experimental results suggest that the proposed method achieves good performance in memory footprint reduction and energy savings.\", \"weakness\": \"-- The proposed method seems like a combination of existing technical methods.\\n-- There is a lack of comparison with other low-cost binary neural network training works.\", \"comments\": \"(1)\\tThere is a difference in the memory consumption in Table1 and in section 5.1 (1.67 MiB or 1.41 MiB ?). The authors may check this.\\n(2)\\tAs for the perspective of overall design, it\\u2019s better to emphasize the trade-offs between the importance of different variables to the overall training and the choose of the data type.\\n(3)\\tIt\\u2019s better to add some comparisons with other low-cost binary neural network training works.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This work proposes a low-cost BNN training scheme to reduce memory consumption and improve energy efficiency. Compared to the traditional BNN training algorithm, the proposed scheme achieves a significant reduction in memory footprint and energy consumption on multiple datasets.\", \"review\": \"I agree with the key contributions listed in the paper, especially the binarization of weight gradients and activations. The paper is well written and clearly articulates a contribution to the literature. The proposed BNN training scheme can have a significant practical impact. The experimental evidence is provided for several standard image classification tasks. Most of the related works are cited. The paper does not contain a theory part, but wherever possible, equations are provided to illustrate how the method works.\", \"concerns\": \"This paper includes a detailed empirical evaluation of the proposed BNN training scheme. The major concern is that the proposed low-cost BNN training scheme can cause a nontrivial accuracy degradation (2.25%) as shown in Table 5. The tradeoff between accuracy and memory footprint/energy consumption is not carefully evaluated. For example, a smaller network model with fewer parameters and activations can be trained using the baseline BNN training scheme to reduce memory and energy consumption. Besides, the baseline networks (BinaryNet and ResNet) used for comparison are out-of-date. Many recent works [1-3] propose new BNN architectures, which improve the accuracy of BNNs significantly. It is useful to justify the effectiveness of the proposed scheme for these SOTA BNN architectures.\\n\\nTo evaluate the energy consumption of the traditional and the proposed BNN training scheme, the authors assume that the two training schemes have the same convergence rate. Although the traditional BNN training scheme consumes higher power during training, it might take fewer epochs to reach the same accuracy. 
Therefore, the saving in energy consumption can be lower than the reported numbers. In addition, the estimated energy consumption of BNN training obtained from using QKeras is very rough, which makes the improvement in energy consumption less convincing. The authors can mainly improve the paper's strength by prototyping the proposed BNN training scheme on an embedded CPU and measuring real-world performance and power.\", \"reasons_for_score\": \"In general, I like the idea of enabling low-cost BNN training by identifying unnecessary high-precision data. However, the improvement numbers presented in the paper need better justification. I would consider raising my score if the authors could address the aforementioned concerns.\\n\\n[1] Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm\\n\\n[2] ProxyBNN: Learning Binarized Neural Networks via Proxy Matrices\\n\\n[3] ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"I think this paper is very interesting and if some minor comments are addressed, it should be accepted for presentation at ICLR21\", \"review\": \"I think this is a very good contribution to ICLR given the topic and the quality of the submission (originality, contribution to the state of the art, experimental evidence, etc.).\\n\\n Some of the strong points of the submission are summarized as follows, along with some points for clarification\\n\\n1.\\tEnabling training on any embedded device (SoC, FPGA, micro-controller) is one of the holy grails for edge AI and IoT. As the authors mention, this also intersects with other domains such as federated learning and privacy-by-design systems. The authors provide ample motivation of the importance of this work, and of some of the applications edge AI might enable, as well as the current challenges.\\n2.\\tThe state of the art (despite the previous comment) contextualizes the subject matter in a succinct but comprehensive manner. Although there are certain aspects that could be improved, such as including a table outlining in a clearer manner the contributions of the authors in this context.\\n3.\\tThe comparison with the traditional training method is clear. However, I would like to know if the authors have made an ablation study to assess whether or not the use of batch normalization would have an effect on the accuracy of the proposed models. Do you provide experimental evidence of the lack of degradation due to the use of l1 batch normalization? These two aspects are not mentioned in the text nor provided in the supplementary sections.\\n4.\\tThe experimental design is good, showing a careful analysis to validate the proposal and several ablation studies to assess the memory footprint reductions and their effects on the training of various models\\n5.\\tThe foundations for the method are presented in great detail in a formalized manner and provide sufficient elements (i.e. 
experiments) to assess the validity of the proposed approach.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
9_J4DrgC_db | Deep Coherent Exploration For Continuous Control | [
"Yijie Zhang",
"Herke van Hoof"
] | In policy search methods for reinforcement learning (RL), exploration is often performed by injecting noise either in action space at each step independently or in parameter space over each full trajectory. In prior work, it has been shown that with linear policies, a more balanced trade-off between these two exploration strategies is beneficial. However, that method did not scale to policies using deep neural networks. In this paper, we introduce Deep Coherent Exploration, a general and scalable exploration framework for deep RL algorithms on continuous control, that generalizes step-based and trajectory-based exploration. This framework models the last layer parameters of the policy network as latent variables and uses a recursive inference step within the policy update to handle these latent variables in a scalable manner. We find that Deep Coherent Exploration improves the speed and stability of learning of A2C, PPO, and SAC on several continuous control tasks. | [
"reinforcement learning",
"exploration",
"latent variable models"
] | Reject | https://openreview.net/pdf?id=9_J4DrgC_db | https://openreview.net/forum?id=9_J4DrgC_db | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"yDlq6P7F6my",
"ygNJxL40MKO",
"LHMXslwAuQf",
"Cc_XUWx5kc6",
"4d5lKJ1CMlX",
"Lu4nLzPL7p9",
"3aruvU45Jh8",
"jeF5t42RPFd",
"nscZAmpeQGQ",
"fD09cgHo5Zx",
"QEkFnN7VACb"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040358508,
1606303066223,
1605550958577,
1605550729848,
1605550603389,
1605549552664,
1605549304009,
1603914295508,
1603846082989,
1603797524279,
1603787707198
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3412/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3412/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3412/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3412/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3412/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3412/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3412/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3412/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3412/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3412/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Unfortunately some of the reviewers' reactions to the author feedback won't be visible to the authors.\\nThe reviewers highly appreciated the replies and revision of the paper.\", \"pros\": [\"The paper renders Generalized Exploration tractable for deep RL.\", \"The idea is applicable to many DRL methods and is potentially very valuable to deal with the headaches associated with DRL.\"], \"cons\": [\"R2 and R4 are still concerned about whether 'smart' exploration will always be advantageous, and whether the added complexity is a good trade-off for the (potentially) better performance. A comparison to 'pure' exploration would still be insightful.\", \"The new 'SAC with Deep Coherent Exploration' only partially addresses the concerns of R2 and R4, especially in terms of performance.\", \"While the paper has improved drastically during the reviewing process, there are still a few too many doubts.\"]}"
"{\"title\": \"Manuscript updated\", \"comment\": \"We would like to note that we have now updated the manuscript. In particular, we describe a way to use coherent exploration with SAC that is more consistent with the approach for the on-policy methods, and yields slightly better results. The approach also allows us to update the precision $\\\\Lambda$ of the search distribution rather than set it heuristically. The major changes are in Sec 4.2, the bottom row of Fig 1, and the associated discussion in 5.1. We have not yet been able to fully address the comments regarding presentation, which we'll finish for a camera-ready version.\"}",
"{\"title\": \"Response to R4\", \"comment\": \"We appreciate your time and comments. We are happy that you found our method\\u2019s scalability an important contribution and our approach to derive non-Markov policy gradients valuable. We hope to address your critiques below.\\n\\n1. Relationship to GE (van Hoof et al., 2017): We acknowledge that GE (van Hoof et al., 2017) is an important basis of our method, but we believe that our recursive and analytical update for the non-Markov policy gradient and the architecture of injecting noise in the last layer are valuable additions, as these two characteristics make GE more scalable for DRL algorithms. We also perform detailed ablations that more precisely explain how the various aspects of our method contribute to the performance difference compared to \\u2018NoisyNets\\u2019 and \\u2018PSNE\\u2019. \\n2. SAC-related concerns: We propose a new formulation in the general response and we will provide the experimental results soon. The degradation of our method (only in HalfCheetah) could be due to the \\u201cheuristic\\u201d approach in SAC; however, our method does show slight improvements on the other two MuJoCo tasks (Walker and Ant).\\n3. Limitation of experiments to MuJoCo tasks: As in our reply to R2, we want to clarify that our proposal concerns an undirected exploration method. Directed exploration methods target \\u2018hard exploration\\u2019 problems like Montezuma\\u2019s revenge, where undirected methods do not have a chance. Still, we believe that improving the exploration behavior of undirected methods is a relevant challenge: undirected methods are commonly used due to their relatively low complexity and easier implementation. Also, even though all experiments are conducted using the MuJoCo physics simulator, the different tasks differ significantly in nature. \\n4. 
Explanation about step-based and trajectory-based exploration: We agree these explanations could be made clearer and we will do so in the updated version of the paper.\\n\\nThank you for your time and feedback and we look forward to further discussion.\"}",
"{\"title\": \"Response to R3\", \"comment\": \"We appreciate your time and your positive comments. We are happy that you found our method\\u2019s scalability an important contribution and our approach to derive non-Markov policy gradients valuable. We respond to your suggestions as follows.\\n\\n1. Moving the section introducing DRL algorithms to the appendix and instead providing an introduction to policy gradients: We agree with your suggestion and we will work on improving the presentation of our content.\\n2. Enlarging the font size in the figures: We agree and we will increase the font size in the updated version of the paper.\\n\\nThank you for your time and feedback and we look forward to further discussion.\"}",
"{\"title\": \"Response to R2\", \"comment\": \"Thank you for your time and comments. We are happy that you found our approach novel and our method mathematically solid. Please consider the following rebuttals to your concerns.\\n\\n1. SAC with Deep Coherent Exploration: We propose a new formulation in the general response and we will provide the experimental results soon. \\n2. Claimed contribution: We agree that the first bullet point in the introduction is not a contribution of the paper. This is one of three key properties that, as we stated in the paper, $\\textbf{together}$ define our contribution. We will further clarify that in the paper. \\n3. Coherent-SAC and complexity in policy learning: In our current Coherent-SAC formulation, relatively little complexity is added: the only additions are sampling the parameters and adapting the variance. The new Coherent-SAC formulation we are currently working on will add some complexity, but less than for the on-policy methods, as we only need the per time-step marginal rather than conditioning on the history so far. This key difference is due to the way SAC optimizes its policy. Since SAC tries to make its policy close to the softmax of the Q-function, the probability of an action given the history is naturally not involved, so the recursive inference is not necessary.\\n4. Significance of the proposed approach in practice: PPO is also a popular and widely-used DRL algorithm and we believe the increased performance of coherent-PPO compared to PPO can also be valuable in practice.\\n5. Motivation of SAC setting different from the formulation in Section 4.1: As stated in the general response, we attempted to stay close to Plappert et al. (2018) but are currently working on a more integrated version of coherent-SAC.\\n6. Performance of coherent-SAC: We expect a more integrated version of Coherent-SAC to yield better results. We expect to share these later this week. \\n7. 
What's the value of hyper-parameters \u03b1 and \u03b4: We set \u03b1=1.01 and \u03b4=\u03c3=0.1 as in Parameter Space Noise for Exploration (Plappert et al., 2018).\\n8. Comparison of the complexity of different exploration strategies: Each update has a complexity linear in the number of time steps, and our method requires the inverse of a matrix whose size grows with the number of weights d_last in the last layer, so it scales cubically in d_last with a naive algorithm. \\n9. Poor performance and tuning of NoisyNet-A2C(PPO), PSNE-A2C(PPO): Firstly, NoisyNet-A2C and PSNE-A2C perform similarly to A2C, except that PSNE-A2C performs poorly in Ant-v2. It\u2019s not clear why this happens, but similar performance is also reported in Plappert et al. (2018), showing that PSNE is very sensitive to hyperparameters. For NoisyNet, its performance on continuous control tasks has not been reported before. For PPO, we believe there are two reasons for the uncompetitive performance of NoisyNet and PSNE. The first reason is that both NoisyNet and PSNE have more parameters than our method as they perturb the policy over all layers. As a result, the updated policy can diverge faster from the old policy and hence meet the KL constraint in PPO much earlier, resulting in far fewer updates than vanilla PPO. We observed this reduction in update steps in PPO in our experiments. Moreover, since PSNE adapts the scale \u201chard\u201d and heuristically, the difference could be even more severe. The second reason concerns the variance of the gradient estimates. As shown in our paper, NoisyNet\u2019s gradient estimates have much higher variance, thus leading to more oscillatory parameter updates and hence a policy that differs more in terms of KL divergence. Lastly, we did not explicitly tune the hyperparameters of any of the methods, but we will consider doing so. \\n10. 
The experiments on a single domain (MuJoCo) seem not convincing enough: \nFirst, we consider the MuJoCo environments to be quite different from each other. The fact that they use the same underlying physics simulation does not change that in our opinion. Second, we want to clarify that our proposal concerns an undirected exploration method. Directed exploration methods target \u2018hard exploration\u2019 problems like Montezuma\u2019s revenge, where undirected methods do not have a chance. Still, we believe that improving the exploration behavior of undirected methods is a relevant challenge: undirected methods are commonly used due to their relatively low complexity and easier implementation. \n\nAgain, we want to thank you for your time and comments and we look forward to further discussion.\"}",
"{\"title\": \"Response to R1\", \"comment\": \"Thank you for taking the time to read our paper and provide us with feedback. We are glad that you found our approach valuable and our experiments insightful. We hope to address your concerns below.\\n\\n1. SAC with Deep Coherent Exploration: We propose a new formulation in the general response and we will provide the experimental results soon.\\n2. Detailed balance within the Markov chain: We might not have explained this clearly enough in the paper. The generative process of exploration noise for a particular set of parameters is the same as for the on-policy methods, and thus preserves detailed balance. However the base distribution (parameterized by $\\mu$, $\\Lambda$) is set (learned or provided by a heuristic), eq. (5) ensures all marginals are equal to this base distribution. \\n\\nAgain, we want to thank you for your time and comments and we look forward to further discussion.\"}",
"{\"title\": \"General response for SAC\", \"comment\": \"We thank all reviewers for your detailed and valuable comments. Here we would like to respond to one common concern about SAC, including how we integrate SAC with our method and the empirical results.\\n1. Integrating SAC with Deep Coherent Exploration: Except for the different ways of policy update in on-policy methods and off-policy methods, we chose this specific heuristic approach to combine our method with SAC in the beginning because this exact same way has been applied and was proven effective in Parameter Space Noise for Exploration (Plappert et al., 2018). Note that this is only for the policy update, for exploration, it is still unchanged and the same with on-policy methods. \\n2. However, we agree with the reviewers that our integration of SAC is more heuristic than our integration of on-policy methods like A2C and PPO. Considering that, we propose a new mathematical formulation of Coherent-SAC. To do that, we closely follow the idea of policy update in SAC: we now optimize our policy by minimizing the KL divergence of the marginalized policy (the policy that could have been sampled on average) at step t to the softmax distribution of Q-function. This integration is natural to both Deep Coherent Exploration and SAC. Moreover, this integration with SAC is more efficient than with on-policy methods: the Gaussian marginalization does not require matrix inversion. This integration will also allow the variance of the distribution over parameters to be learned rather than heuristically set, which we expect will improve results. \\n3. Empirical results of Coherent-SAC: We are currently working on the experiments and expect to add and show the results in the paper in a short time. If this method performs as we expect, we will update the corresponding text in our paper.\"}",
"{\"title\": \"A promising exploration method that would be of interest to many in the community.\", \"review\": \"I would like to thank the authors of \\\"Deep Coherent Exploration For Continuous Control\\\" for their valuable and interesting submission.\\n\\nSummary of the paper\\n--\\n\\nThe basis of this work is van Hoof et al., 2017; there, \\u201cGeneralized Exploration\\u201d views policy parameters as being drawn from a per-trajectory Markov chain. Experience is collected with a different set of parameters at each timestep, corresponding to steps along the chain.\\nThe authors of this work introduce \\u201cDeep Coherent Exploration\\u201d, which scales to deep reinforcement learning methods.\", \"the_main_contributions_are\": \"1. Simplifying the setting by modeling just the parameters in the last layer.\\n2. A recursive, analytic expression for marginalizing over the last-layer parameters, useful for obtaining low-variance gradients with on-policy methods.\\n3. Detailed recipes for incorporating the method into on-policy methods (A2C and PPO) as well as an off-policy method (SAC).\\n\\nAssessment\\n--\\nThis work explores an important problem in RL and proposes a promising method that would be of interest to many in the community, and I think it would be a valuable contribution to ICLR.\\n\\n-- The positives -- \\n\\nThe paper is well written, and does a great job of introducing the reader to the relevant concepts and situating itself in the literature.\\nThe empirical results for the on-policy methods are really strong and clearly demonstrate the value of this approach. 
Additionally, the ablation experiments were very insightful.\\nThe detailed appendix makes me confident that readers would be able to easily reproduce the method.\\n\\n-- The concerns --\", \"the_story_for_off_policy_methods_seems_almost_unrelated\": \"the generative model is much more restrictive (isotropic noise), the optimization method is based on a heuristic (that subsequent policy parameters should be separated by a fixed distance in the action distribution), detailed balance isn't maintained within the Markov chain, and the experimental results aren\\u2019t as strong as those of the on-policy settings.\\n\\nSuggestions\\n--\\nIt might make more sense to reframe this as an on-policy method and explicitly address the off-policy case as a limitation. Would the authors consider this alternative?\\n\\nI tentatively score this paper as accept, and looking forward to the rebuttal to calibrate with the other reviewers.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Complex approach with little improvement\", \"review\": \"Summary: This paper focuses on undirected exploration strategies in reinforcement learning. Following the prior work, this paper proposes an exploration method unifying the step-based and trajectory-based exploration. The authors propose to perturb only the last(linear) layer of the policy for exploration, instead of perturbing all layers of the policy network. Also, the authors use analytical and recurrent integration for policy updates. Experiments show that the proposed exploration strategy mostly helps A2C, PPO and SAC in three Mujoco environments.\", \"clarity\": \"This paper is generally written clearly. Some details need more clarification as pointed out in 'Cons'.\", \"originality\": \"As far as I know, the proposed technique is novel in the literature of undirected exploration. But for the three bullet points in section 1, the first point of \\\"Generalizing Step-based and Trajectory-based Exploration\\\" should not be one of the main contributions of this paper, because this paper follows the formulation of policy in van Hoof et al. (2017) and the latter proposed the generalized exploration connecting step-based and trajectory-based exploration. The work can be viewed as an extension of van Hoof et al. (2017) with a deep policy network.\", \"significance\": \"The proposed method is mathematically solid, but the main concern lies in empirical performance. Nowadays SAC is the state-of-the-art and generally used method for continuous control tasks and it is more advanced than A2C and PPO. But the proposed method does not obviously improve the performance of SAC while inducing much more complexity in policy learning. 
Therefore the significance of the proposed approach in practice might be limited.\", \"pros\": \"*The authors provide detailed mathematical derivation (in the main text and the appendix) to support the proposed method.\\n*The proposed method significantly outperforms the baselines when investigating the on-policy methods A2C and PPO.\\n*The authors provide ablative studies about hyper-parameter values and components of the proposed method with A2C.\", \"cons\": \"*In section 4.2, \\\"we maintain and adapt a single magnitude \\u03c3 for the parameter noise\\\". What's the motivation for this setting, which differs from the formulation in section 4.1?\\n*In section 5, why is the advantage of the proposed method poor with SAC? What's the value of hyper-parameters \\u03b1 and \\u03b4? Is the proposed method sensitive to these hyper-parameter choices? \\n*In section 5, apart from the comparison of the performance of the learned policy, the comparison of the complexity (which might be measured by wall time to learn the policy?) of different exploration strategies can also be interesting. \\n*In the first two rows of Figure 1, why do the baseline methods NoisyNet-A2C(PPO) and PSNE-A2C(PPO) even significantly underperform the vanilla A2C(PPO)? The intuition is that introducing exploration strategies will mostly help the agent learn more quickly. Is it possible that the baselines are not tuned well?\\n*The experiments on a single domain (Mujoco) seem not convincing enough. It would be better if there were experiments on other more complicated domains.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper presents a method to combine step-based exploration with trajectory-based exploration (in the form of action-space noise and parameters-space noise) in continuous MDPs, which is scalable to deep RL methods.\\n\\nThe paper is overall well-written and easy to follow. The Introduction and Related-work sections are good and clear.\\nSection 3 could benefit from some proof-reading. In particular, Section 3.2 is quite dense. I think it would be unhelpful for the reader not already familiar with the discussed algorithms, and on the other hand redundant for those familiar with them. I would consider moving it to the appendix, and instead provide a more high-level description of policy-gradient methods, without getting into the specific details of PPO vs SAC vs A2C. Also consider that this section uses terms which are not explicitly defined (Q-function and Advantage function) again, making it less approachable or clear for readers less familiar with RL.\\nOne other minor (and technical) issue is that the font used in the figures (legend, axis titles, etc) is very small, barely readable even in 150%.\\n\\nWhile the underlying theoretical ideas are not novel (as the authors mention, the basic approach here is following Hoof et al. 2017), there is an important contribution in the scalability of the method, as well as in its evaluation on \\\"standard\\\" benchmark for continuous RL against some other strong baselines. 
Another important advantage of the approach is that while the policy is non-markov (due to the \\\"global\\\" trajectory-based exploration or coherence), the policy gradients can still be estimated in a more-or-less standard, step-based, way, thanks to analytical integration of the \\\"latent\\\" variables (basically the parameters of the last layer), hereby overcoming the challenge of high variance in PG estimate for non-markov policies.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Deep Coherent Exploration For Continuous Control\", \"review\": \"Summary:\\n\\nThis paper proposes Deep Coherent Exploration that unifies step-based exploration and trajectory-based exploration on continuous control. There exists a prior work that bridges a gap between the two exploration methods for linear policies, and this paper generalizes the prior work for various deep RL methods: on-policy (A2C, PPO) and off-policy (SAC). Finally, Deep Coherent Exploration enhances the performance of baseline algorithms and has better performance than prior works (NoisyNet, PNSE) on Mujoco tasks.\", \"pros\": [\"For combining the proposed method with on-policy learning, this paper derives the log-likelihood of whole trajectory recursively.\", \"For on-policy methods (A2C, PPO), the proposed method has large performance gain on Mujoco tasks.\"], \"cons\": [\"The idea of this paper directly follows GE [van Hoof et al., 2017] and is not much different from GE.\", \"For SAC, the proposed method is not much effective and it even degrades the performance of the HalfCheetah task.\", \"The paper focuses on exploration, but the experiments only focus on the return performance of simple Mujoco tasks.\", \"In order to show the superiority of the proposed method, additional experiments on pure exploration or sparse rewarded tasks are needed.\"], \"minor_concerns\": [\"In background, there is no explanation about step-based and trajectory-based exploration.\", \"For the off-policy case, there is insufficient explanation for why they use single sigma and the connection point of the proposed method and eq (5).\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
BMua55nUyyt | Median DC for Sign Recovery: Privacy can be Achieved by Deterministic Algorithms | [
"Jiyuan Tu",
"Weidong Liu",
"Xiaojun Mao"
] | Privacy-preserving data analysis becomes prevailing in recent years. It is a common sense in privacy literature that strict differential privacy can only be obtained by imposing additional randomness in the algorithm. In this paper, we study the problem of private sign recovery for sparse mean estimation and sparse linear regression in a distributed setup. By taking a coordinate-wise median among the reported local sign vectors, which can be referred to as a median divide-and-conquer (Med-DC) approach, we can recover the signs of the true parameter with a provable consistency guarantee. Moreover, without adding any extra randomness to the algorithm, our Med-DC method can protect data privacy with high probability. Simulation studies are conducted to demonstrate the effectiveness of our proposed method. | [
"Median-of-means",
"divide-and-conquer",
"privacy",
"sign recovery"
] | Reject | https://openreview.net/pdf?id=BMua55nUyyt | https://openreview.net/forum?id=BMua55nUyyt | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"0mig0o1HjbZ",
"lrkzU3TYWIu",
"W1cnkX6oFcP",
"yqApCLbSfA9",
"-rcll8AVSLQ"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513766,
1604427832056,
1603834662406,
1603671665405,
1602636296554
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3411/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3411/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3411/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3411/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper considers a problem of weak mean estimation under a differential privacy like constraint. Specifically, estimating the signs of a (sparse) mean, and not the actual values.\\n\\nThe reviewers brought up a number of concerns, including the weak privacy guarantee (a type of average-case privacy). Other lesser concerns include inaccuracies in comparisons with the literature and lack of interest in the algorithm/method itself.\\n\\nAs there was no response from the authors, there was little further discussion afterwards, and the reviewers remained in their opinion to reject the paper.\"}",
"{\"title\": \"Median DC for Sign Recovery: Privacy can be Achieved by Deterministic Algorithms\", \"review\": \"This paper considers the problem of private sign recovery for sparse mean estimation and sparse linear regression in a distributed setting. The paper proposes taking a coordinate-wise median among the reported local sign-vectors and gives its theoretical guarantees. Furthermore, the paper states that this is the first deterministic algorithm with a provable high-probability privacy guarantee.\", \"i_tend_to_vote_for_rejecting_this_paper_for_the_following_reasons\": \"1. The problem considered in this paper is neither exciting nor complex. Furthermore, the algorithm proposed is quite natural. I agree these are some good results, but maybe not good enough for ICLR.\\n\\n2. It is not appropriate to say this is the first deterministic algorithm with a provable high-probability privacy guarantee. In fact, utilizing robust estimators, as combined with propose-test-release (PTR) is a very basic technique in the literature of differential privacy. See Section 3.2 and 3.3 in the textbook (https://privacytools.seas.harvard.edu/files/privacytools/files/complexityprivacy_1.pdf). The intuition behind PTR is that for a robust estimator (like median), although the global sensitivity is huge, the local sensitivity is only large in some corner cases. And the local sensitivity is negligible with high probability, where the randomness is drawn from the dataset's generation. Using robust estimators (which is deterministic), which gives privacy guarantees with high probability, is exactly the start point of these algorithms. Therefore, I do not find it appropriate to state it is a new observation. 
By the way, using the median is also a very classic idea in robust estimation, for example, estimating high-dimensional Gaussians by utilizing the Tukey median.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Paper on robust algorithms for sign recovery.\", \"review\": \"==== Sumary of the problems considered and paper contribution\\n\\nThis paper studies the problem of sign recovery for sparse mean estimation and sparse linear regression. They show that a median based divide and conquer algorithm has high utility (as measured by power, false discovery rate, and positive and negative false discovery rates) and robustness properties. They discuss the privacy implications of these robustness properties, rigorously showing that the deterministic algorithm they define satisfies random differential privacy, where the probability (over the choice of dataset) of not satisfying privacy tends to 0. They show experimentally that their algorithm has low communication and performs well, even when compared to algorithms with no robustness properties.\\n\\n==== Comments \\n\\nThe paper touches on some interesting questions around robustness and privacy. They essentially design a robust algorithm, subdividing the data to produce averages, reducing the signal by reporting on the sign of each answer, then using the median to give the final answer. Their privacy guarantee amounts to showing that this process is very stable to outliers. The authors remark at one point that this stability implies that they could likely use a procedure like propose test release to give an actually differentially private version. I wonder why they didn\\u2019t do this? I\\u2019d be very interested to see how it performed.\\n\\nI think the strength of the stability statements Proposition 1, Proposition 2 and corollary 1 gets a bit lost in the vagueness of the wording. It should be made clear that the randomness is over X, and X\\u2019 is any worst case neighbouring dataset. This is significantly stronger than if the randomness was over the pair. This stronger version means that the algorithm is stable against outliers, including those caused by unclean data, or malicious participants. 
This is particularly interesting in Corollary 1, which discusses group privacy. Also \\u201crandom differential privacy\\u201d, which has appeared in the literature previously, seems like the notion the authors are looking for. However, it seems like a little bit of a stretch to call this \\u201croughly (0,0)-DP\\u201d.\\n\\nThe paper is well written, it clearly states its theoretical guarantees and discusses intuition. In particular the privacy guarantees are clearly stated, and their difference to pure DP highlighted. I thought Section 2.2 was especially well-written. I\\u2019m not an expert on sparse DP algorithms but it seemed to discuss prior work well, and highlight how this work is different, as well as why previous work did not immediately imply a solution. \\n\\nIt looks like the pooled mean does really well for false discovery rates, why is this? \\n\\n==== Presentation\\n\\nThe paper is well written. I think the privacy aspect would be more compelling if the authors ran the propose test release version, which would actually be DP. It seems like this experiment would be interesting whether or not the propose test release version did well.\", \"small_comments\": \"Definition 1 should be all measurable subsets, not just all subsets.\", \"typo\": \"second sentence in intro: \\u201clarge quantities of sensitive data are..\\u201d should be \\u201clarge quantities of sensitive data have been..\\u201d\\nIt would have been helpful to have the definition of the sgn of a sparse vector in the introduction, I was a little unsure exactly what was meant. \\nWhen defining the distribution space in (4), I think it would be helpful to state why the condition is required, not just that it is mild. 
It\\u2019s for Berry Esseen, yes?\\nIt would be nice to see a discussion of the privacy implications of the five-fold cross-validation.\", \"the_second_sentence_of_the_abstract\": \"it is not just common sense that randomness is required, unless the function is constant, randomness is provably required.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"ICLR 2021 Conference Paper3411 AnonReviewer2\", \"review\": \"Summary:\\nThe paper considers the sign recovery problem in a distributed setting with privacy constraints. The paper proposes an algorithm \\u201cmedian divide-and-conquer (Med-DC)\\u201d which takes the sign locally in each machine and then takes the median globally. The paper shows that in the sparse mean estimation setting, Med-DC is correct with high probability under some assumptions and Med-DC satisfies a weaker notion of differential privacy proposed by the paper. The paper then extends this algorithm to the sparse linear regression setting.\", \"concerns\": \"The main concern about the paper is whether the \\u201cweaker\\u201d differential privacy notion proposed by the paper makes sense. \\n\\nTo me, this modification on differential privacy is very big, but it is not well discussed in the paper. In my understanding, the main difference between this notion and the standard differential privacy is that the standard differential privacy considers the worst case of the input but the modified notion considers the average case when inputs are assumed to be sampled from a distribution. From a single user\\u2019s perspective, this new privacy notion makes sense only when the user trusts that other users will follow the mechanism.\\n\\nIn this new notion, if you simply take the median of n binary numbers sampled from Bernoulli half, the median (without any noise) is private because with high probability, a single number flip won\\u2019t flip the median. And this shows that it might be very easy to design deterministic algorithms under this new privacy notion. It seems to me that the main reason for Med-DC being both deterministic and private in this new notion is the weakness of the new privacy notion but not the well-design of Med-DC.\", \"reasons_for_score\": \"I vote for rejection. 
As discussed in the concerns, the modification of the differential privacy notion makes the problem very different and this modification is not well justified in the paper.\", \"typos_and_minor_comments\": \"(1) Page 2: the definition of the supp(v) is not very clear. It\\u2019s written as {condition 1 | condition 2}. What are the elements in the set?\\n(2) Page 2: when a_n = O(b_n) and b_n = O(a_n), you can write a_n = \\\\Theta(b_n) as the standard notation.\\n(3) Page 6, first paragraph of section 3.1, \\u201crecovery\\u201d -> \\u201crecover\\u201d\\n(4) I find the name \\u201cdivide-and-conquer\\u201d does not fit the algorithm very well. A divide-and-conquer algorithm breaks down a problem into many simpler sub-problems. The algorithm in the paper has data points partitioned because of the distributed setup of the problem.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Median DC review. Decision: Reject.\", \"review\": \"The paper gives \\\"almost private\\\" algorithms for problem of sign recovery of mean vector and of linear regression. The techniques follow their general framework of Median DC, which is similar to the well-known median-of-means approach. They give theoretical guarantees for the same, along with empirical results comparing with known differentially private algorithms and their non-private counterparts.\", \"strengths\": \"Strong technical justifications in terms of proofs and experiments.\", \"weaknesses\": \"I don't totally understand the message of the paper. Why is loss of worst-case privacy acceptable? This is what happens in the paper, and without a good justification for that, I don't see why such a loss of privacy is okay. Even privacy has not been formally defined. Apart from group privacy, no other important properties of the \\\"definition\\\" have been proved.\", \"nitpicks\": \"-The title is a bit misleading, even though it is obvious that DP cannot be achieved by deterministic algorithms.\\n-You talk about certain papers doing mean estimation on page 3, but you forget to mention a recent result by Kamath et al on private mean estimation of heavy-tailed distributions, which also uses median of means framework, and may have connections to robustness.\\n-At the bottom of page 5, you say that the method is roughly regarded as a (0,0)-DP algorithm. That's a very brave thing to say. I wouldn't claim things like that without formal justifications.\", \"score_justification\": \"The weaknesses of this paper just outweigh the positives. A good motivation/justification of the privacy model could have helped to get a better score.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
RrIqhkFEpec | Isometric Autoencoders | [
"Amos Gropp",
"Matan Atzmon",
"Yaron Lipman"
] | High dimensional data is often assumed to be concentrated on or near a low-dimensional manifold. Autoencoders (AE) is a popular technique to learn representations of such data by pushing it through a neural network with a low dimension bottleneck while minimizing a reconstruction error. Using high capacity AE often leads to a large collection of minimizers, many of which represent a low dimensional manifold that fits the data well but generalizes poorly.
Two sources of bad generalization are: extrinsic, where the learned manifold possesses extraneous parts that are far from the data; and intrinsic, where the encoder and decoder introduce arbitrary distortion in the low dimensional parameterization. An approach taken to alleviate these issues is to add a regularizer that favors a particular solution; common regularizers promote sparsity, small derivatives, or robustness to noise.
In this paper, we advocate an isometry (i.e., local distance preserving) regularizer. Specifically, our regularizer encourages: (i) the decoder to be an isometry; and (ii) the encoder to be the decoder’s pseudo-inverse, that is, the encoder extends the inverse of the decoder to the ambient space by orthogonal projection. In a nutshell, (i) and (ii) fix both intrinsic and extrinsic degrees of freedom and provide a non-linear generalization to principal component analysis (PCA). Experimenting with the isometry regularizer on dimensionality reduction tasks produces useful low-dimensional data representations. | [
"manifold learning",
"autoencoders"
] | Reject | https://openreview.net/pdf?id=RrIqhkFEpec | https://openreview.net/forum?id=RrIqhkFEpec | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"DViZLN3lFcB",
"P2guORp4MO",
"UsW8QemGWfl",
"Gql1JraYP6m",
"UeMB6fTTg7F",
"9l_1C3NBCX2",
"3jVtnIGgDzm",
"I8i1y6kgG4F",
"6JVblQOb58z"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040463973,
1605884097981,
1605883948022,
1605883877857,
1605883768367,
1604477255656,
1603997706530,
1603897677470,
1603852582399
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3409/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3409/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3409/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3409/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3409/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3409/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3409/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3409/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper introduces a new formulation for learning low-dimensional manifold representations via autoencoder mappings that are (locally) isometric by design. The key technical ingredient is the use of a particular (theoretically motivated) weight-tied architecture coupled with isometry-promoting loss terms that can be approximated via Monte Carlo sampling. Representative results on simple manifold learning experiments are shown in support of the proposed formulation.\\n\\nThe paper was generally well-received; all reviewers appreciated the theoretical elements as well as the presentation of the ideas.\\n\\nHowever, there were a few criticisms. First, the fact that the approach requires Monte Carlo sampling in very high dimensions automatically limits its scope. Second, the experiments seemed somewhat limited to simple (by ICLR standards) datasets. Third and most crucially, the approach lacks a compelling-enough use case. It is not entirely clear what local isometry enables, beyond nice qualitative visualizations (and moreover, what the isometric autoencoder provides over other isometry-preserving manifold learning procedures such as ISOMAP). Some rudimentary results are shown on k-NN classification and linear SVMs, but the gains seem to be in the margins. \\n\\nThe authors are encouraged to consider the above concerns (and in particular, identifying a unique use case for isometric autoencoders) while preparing a future revision.\"}",
"{\"title\": \"Authors response to reviewer 3\", \"comment\": \"**Q: I have to say that the argument of global isometry is too ambitious. In the theory and method part, this method only guarantees the local isometry. The authors do mention that \\\"a local isometry which is also a diffeomorphism is a global isometry\\\" on page 3 bottom paragraph. However, there's no discussion about the \\\"diffeomorphism\\\" in the following sections.**\\n\\n**A:** The diffeomorphism of the decoder is encouraged by the reconstruction term, as mentioned at the bottom of page 4 below Lemma 2. Therefore as the decoder and the encoder are smooth by construction (we use Softplus instead of ReLU) we are encouraging our decoder to be a diffeomorphism.\\n\\n**Q: Is there any possibility that the author can provide one more toy example for the global isometry when the data lie on some manifold shape? This would strongly support the global argument.**\\n\\n**A:** We would like to point out that figure 3, and table 1 already provide such an example. In figure 3, the S surface and the swiss-roll surfaces are both isometric to the euclidean plane. Isometries in particular preserve densities (unit det jacobian) and since the surfaces are sampled uniformly (left column), global isometry can be inspected by observing the (almost) perfect planar rectangle with uniformly sampled points, as shown in the I-AE column. This qualitatively demonstrates the (global) isometry. Table 1, quantitatively demonstrates low isometric distortion that, together with the bijectivity in these examples, implies the encoder is close to a perfect (global) isometry. \\n\\n**Q: From my understanding, the distance in the original space is the Euclidean distance without considering the local geometry of the data. Can the author provide some comments on this? 
The distance in the original space should be the geodesic distance when arguing the Isometry.**\\n\\n**A:** The distance in the original space is **not** the euclidean distance. The decoder $f$ is encouraged to have an orthogonal differential (equation 6). This local condition, if satisfied everywhere, guarantees isometry, that is geodesic distances on the manifold are represented by straight lines in the latent space. \\n\\n**Q: The experiment of the data visualization is somewhat weak. The benefit of the Isometry Autoencoder is not well addressed. The t-SNE is well used for visualization with almost nothing wrong. The only benefit comes when arguing the \\\"even\\\" sampling. Can the author provide some comments on this about why we need the \\\"even\\\" in the visualization?**\\n\\n**A:** The \\u201ceven\\u201d structure is useful when the data is uniformly sampled from the data manifold. For example, in the COIL20 dataset, we expect to get equidistant rings, i.e., with even spacings. Notice that t-SNE is limited only to embed data points and does not generalize to unseen data, as opposed to autoencoders. Lastly, to further highlight the benefit in the isometric AE beyond visualization, we provide an additional experiment in the revision, where we use the (unsupervised) embedded vectors to train a simple classifier (KNN or linear SVM), see section 4.3 in the revised paper. As can be seen in that experiment, I-AE embeddings result in higher accuracies of the simple classifiers, over a wide range of parameters (such as latent dimension). The fact that I-AE outperforms other AEs showcase its ability to better preserve and model the structure of the high dimensional data manifold.\"}",
"{\"title\": \"Authors response to reviewer 1\", \"comment\": \"**Q: The experiments mainly rely on visualization but fail to give some numeric results. For instance, can IAE be useful for semi-supervised learning (Like VAEs)? How can we practically make use of the isometry property in applications other than data visualization?**\\n\\n**A:** Thank you for this comment. We provide an additional experiment in the revision, where we use the (unsupervised) embedded vectors to train a simple classifier (KNN or linear SVM), see section 4.3 in the revised paper. As can be seen in that experiment, I-AE embeddings result in higher accuracies of the simple classifiers, over a wide range of parameters (such as latent dimension). The fact that I-AE outperforms other AEs showcases its ability to better preserve and model the structure of the high dimensional data manifold.\"}",
"{\"title\": \"Authors response to reviewer 4\", \"comment\": \"**Q: For the synthetic data, I am not sure I understand why they did not choose something of high dimension? Maybe I am missing something, but would it be impossible to generate, say, a 50 dimensional manifold in 100 dimensions? Maybe the triangulation part will be challenging, but that is not the only way to compare between the various algorithms.**\\n\\n**A:** We experiment with standard manifold examples commonly used in existing literature. The surfaces chosen already pose a challenge to previous methods while allowing quantitative and qualitative evaluation. We believe triangulating a high dimensional manifold would be exponential in the dimension and therefore very challenging. \\n\\n**Q: Your algorithm does manifold learning. Why not, for instance, take all the images corresponding to some fixed digit (e.g. \\\"3\\\"), which is presumably close to a low (but definitely more than 2....) dimensional manifold, and see how well your manifold learning algorithm reconstructs them?**\\n\\n**A:** Thank you for this comment. To quantitatively evaluate how well our algorithm learns manifolds of higher dimension we provided an additional experiment in the revision, where we use the embedded vectors to train a simple classifier (KNN or linear SVM), see section 4.3 in the revised paper. The fact that I-AE outperforms other AEs showcases its ability to better preserve and model the structure of the high dimensional data manifold.\"}",
"{\"title\": \"Authors response to reviewer 2\", \"comment\": \"**Q: The experimental results also do not indicate how the embeddings learned using the proposed method perform on downstream classification tasks, for instance. This comparison would be useful to have to compare the usefulness of the embeddings.**\\n\\n**A:** Thank you for this comment. We have added an additional experiment in the revised paper (section 4.3), that evaluates our embeddings for downstream classification tasks. As you can see in the results, we outperform other autoencoder methods.\\n\\n**Q: It is not clear why one should require that L2 distances in the high dimensional space are the same as distances in the latent space.**\\n\\n**A:** Isometry does not mean L2 distances in the high dimensional space are preserved in the latent space; rather, **geodesic distances** over the manifold are preserved. The geodesic distances indeed locally coincide with the L2 metric in the ambient Euclidean space, however, the geodesic distance for distant points measures the shortest path restricted to the manifold. \\n\\n**Q: The numerical results on reconstruction error that the authors present in the appendix do not indicate any reason to prefer isometric AEs over other baselines that are considered. 
In case there is a setting where isometric AEs can be shown to model the data manifold better than regular AEs, that is not highlighted in the current draft.**\\n\\n**A:** We refer the reviewer again to section 4.3 in the revised paper for another quantitative justification showing isometric AEs better model manifold data compared to other AEs.\\n\\n**Q: The authors claim that isometric autoencoders would \\\"evenly sample the manifold\\\" which is a little confusing, since the sampling of the data manifold is separate from the technique used to model the data (regular AEs vs isometric AEs).**\\n\\n**A:** We meant that isometric autoencoders evenly sample the manifold in the sense they do not shrink or expand the space, locally they behave as orthogonal linear transformations. We added a clarification in the paper. \\n\\n**Q: The projection operator that is used to define the pseudoinverse of the encoder is not necessarily a function, since there could possibly be many points on the manifold that correspond to the same L2 distance from the point being projected. Are there further assumptions on the structure of the data manifold that prevent this from being the case?**\\n\\n**A:** For closed manifolds there is always the closest point, although indeed not necessarily unique. So technically the projection can be made a function (i.e., choose one closest point), although not continuous everywhere. In any case, for points close enough to a smooth manifold uniqueness holds. We added a clarification in the revised paper (see just before Definition 1). \\n\\n**Q: Estimating the L_iso term seems to require a distribution over the latent space R^d, that the authors say is computed using a fit of the latent codes g(x), x \\\\in \\\\cal X. 
Are the latent codes computed using the current estimate of the encoder?**\\n\\n**A:** During training, for each batch we compute the mean and standard deviation of the (current) encoded batch, and use it to define a multivariate gaussian, from which we sample.\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"Update: I appreciate the authors addressing my concerns. I have increased my score accordingly.\", \"original_review\": \"This paper describes a new type of regularization for the parameters of an autoencoder - one that forces the decoder to be an isometry. The authors present conditions that need to be satisfied by the encoder and decoder parameters, and show empirically that the regularization terms that they propose ensure that the resulting autoencoder has an isometric decoder. The paper is well written and easy to follow.\\n\\nWhile the authors assert that forcing the decoder to be an isometry is desirable since isometries preserve distances and angles, it is not clear why that is a desirable property while modeling data on a manifold. Distances between points on a data manifold are not usually measured through L2 distances in a latent dimension, and it is not clear why one should require that L2 distances in the high dimensional space are the same as distances in the latent space. The numerical results on reconstruction error that the authors present in the appendix do not indicate any reason to prefer isometric AEs over other baselines that are considered. In case there is a setting where isometric AEs can be shown to model the data manifold better than regular AEs, that is not highlighted in the current draft.\\n\\nThe authors claim that isometric autoencoders would \\\"evenly sample the manifold\\\" which is a little confusing, since the sampling of the data manifold is separate from the technique used to model the data (regular AEs vs isometric AEs). \\n\\nThe experimental results also do not indicate how the embeddings learned using the proposed method perform on downstream classification tasks, for instance. 
This comparison would be useful to have to compare the usefulness of the embeddings.\", \"a_few_minor_points_of_confusion\": \"1) the notation f^{-1} is a little misleading since the encoder is not necessarily an invertible function from R^d to R^D. If the encoder mapping is restricted to the range of f then this notation is more appropriate. \\n2) The projection operator that is used to define the pseudoinverse of the encoder is not necessarily a function, since there could possibly be many points on the manifold that correspond to the same L2 distance from the point being projected. Are there further assumptions on the structure of the data manifold that prevent this from being the case?\\n3) Estimating the L_iso term seems to require a distribution over the latent space R^d, that the authors say is computed using a fit of the latent codes g(x), x \\\\in \\\\cal X. Are the latent codes computed using the current estimate of the encoder? If so is there some sort of alternating minimization happening, which holds the current estimate of the encoder fixed while computing the isometric regularization? If not, how are the latent codes computed?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Novel auto-encoder based method for manifold learning\", \"review\": \"The paper suggests a novel auto-encoder based method for manifold learning, by encouraging the decoder to be an isometry and the encoder to locally be a pseudo-inverse of the decoder. It is noted that for a linear architecture, this gives PCA; therefore, this can be seen as a nonlinear PCA approach.\\n\\nIn theorem 1, the authors claim that for the encoder-decoder solution to have the desired properties, certain equalities have to be satisfied by the local differential matrices of the encoder and decoder. This gives rise to a loss function that is composed of 3 parts: A reconstruction loss (as usual with autoencoders) plus a combination of a loss penalizing non isometric decoders, plus a loss penalizing an encoder that is not a pseudo-inverse of the decoder. This loss function is claimed to be the main technical novelty of the paper.\\n\\nIn the experimental part, the authors compare the merits of this approach on synthetically generated low dimensional manifolds in high dimensional ambient spaces, against other standard manifold learning algorithms, and show that the paper's method outperforms other methods using a measure of distortion of triangle edges on a grid. They also experiment with \\\"real data\\\" (e.g. MNIST), showing the merits of the proposed algorithm when visualizing the 2 dimensional bottleneck of the autoencoder. The comparison here is against other algorithms for high dimensional data visualization.\\n\\nThe overall idea and theory seem interesting. The experiments are a bit disappointing. For the synthetic data, I am not sure I understand why they did not choose something of high dimension? Maybe I am missing something, but would it be impossible to generate, say, a 50 dimensional manifold in 100 dimensions? Maybe the triangulation part will be challenging, but that is not the only way to compare between the various algorithms. As for the real data section (e.g. 
MNIST), I am not sure I see why you compare your algorithm against algorithms that are intended for 2-d visualization (e.g. t-SNE). Your algorithm does manifold learning. Why not, for instance, take all the images corresponding to some fixed digit (e.g. \\\"3\\\"), which is presumably close to a low (but definitely more than 2....) dimensional manifold, and see how well your manifold learning algorithm reconstructs them?\\n\\nThe editorial level of the paper is not very high, due to grammatical English mistakes. Here are examples (the list is not complete):\\np. 1 \\\"Autoencoder (AE) can also be seen\\\" => \\\"Autoencoders can also be seen\\\" or \\\"An autoencoder can also be seen...\\\"\\n\\n\\\"AE is trying to reconstruct X...\\\" - The present progressive tense is not suitable here. Maybe \\\"AE's try to reconstruct\\\"? Or \\\"AE's are designed to reconstruct...\\\" or \\\"An AE reconstructs...\\\"\\n\\np. 2 \\nManifold learning generalizeS\\n\\n\\np. 4\\n\\\"As-usual \\\" => As usual\\n\\np. 5\\n\\\"Does our suggested Loss... drives\\\" -> \\\"drive\\\"\\n\\np. 6\\nWhy is \\\"Denoising\\\" capitalized?\\n\\n\\\"In addition, we compared versus...\\\" => \\\"...compared against...\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Review\", \"review\": \"The authors propose a new version of the regularized autoencoder where they explicitly regularize its decoder to be locally isometric and its encoder to be the decoder's pseudo inverse. Through a series of experiments and visualization, the IAE exhibits better manifold structure.\\n\\nRegarding the motivation and the math, I like the idea of an isometric regularizer preserving the geometric properties in the learned manifold. The illustration in figure 1 does clearly point out the advantages of IAE over the contractive autoencoder. The math formulation primarily sticks with a linear version of the autoencoder. It would be great to get some insights for a non-linear counterpart.\\n\\nRegarding the experiments, indeed the authors successfully show that the IAE's decoder converges to an isometry and the proposed regularizer promotes a more favourable manifold structure. However, the experiments mainly rely on visualization but fail to give some numeric results. For instance, can IAE be useful for semi-supervised learning (Like VAEs)? How can we practically make use of the isometry property in applications other than data visualization?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The theories are nice but the experiment doesn't support the theory enough.\", \"review\": \"Strength:\\n1. This paper provides a novel method to train a local isometric autoencoder, which can preserve the local Euclidean distances well between the original space and the latent space.\\n2. The theories are well presented and explained pretty well. Also, Isometry is a very important property in several cases including manifold-learning, etc. \\n3. Apart from the typos and several tiny errors, the overall writing is sound and smooth.\", \"weakness\": \"1. I have to say that the argument of global isometry is too ambitious. In the theory and method part, this method only guarantees the local isometry. The authors do mention that \\\"a local isometry which is also a diffeomorphism is a global isometry\\\" on page 3 bottom paragraph. However, there's no discussion about the \\\"diffeomorphism\\\" in the following sections. \\n2. Also, the first experiment (3D $\\\\rightarrow$ 2D) only supports the local isometry, since the distance is computed only based on the triangular mesh edges. Is there any possibility that the author can provide one more toy example for the global isometry when the data lie on some manifold shape? This would strongly support the global argument.\\n3. From my understanding, the distance in the original space is the Euclidean distance without considering the local geometry of the data. Can the author provide some comments on this? The distance in the original space should be the geodesic distance when arguing the Isometry.\\n4. The experiment of the data visualization is somewhat weak. The benefit of the Isometry Autoencoder is not well addressed. The t-SNE is well used for visualization with almost nothing wrong. The only benefit comes when arguing the \\\"even\\\" sampling. 
Can the author provide some comments on this about why we need the \\\"even\\\" in the visualization?\", \"some_other_minor_comments\": \"1. There's a typo in the last line of the first page. The encoder should be $R^D\\\\rightarrow R^d$ with $d<D$. Similarly for the inverse $R^d\\\\rightarrow R^D$. \\n2. The differential of the decoder should be the Jacobian matrix, right? This would be clearer than just mentioning the differential.\\n3. Table 1 should be Figure 1. Also, can the author provide more details about this figure? Is this figure for illustration only, or is this result actually trained and plotted? The term \\\"evenly\\\" is a strong word that needs a clearer definition.\\n4. The order of Figure 3 and Figure 2 is messed up.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
b4Phn_aTm_e | Pseudo Label-Guided Multi Task Learning for Scene Understanding | [
"Sunkyung Kim",
"Hyesong Choi",
"Dongbo Min"
] | Multi-task learning (MTL) for scene understanding has been actively studied by exploiting correlation of multiple tasks. This work focuses on improving the performance of the MTL network that infers depth and semantic segmentation maps from a single image. Specifically, we propose a novel MTL architecture, called Pseudo-MTL, that introduces pseudo labels for joint learning of monocular depth estimation and semantic segmentation tasks. The pseudo ground truth depth maps, generated from pretrained stereo matching methods, are leveraged to supervise the monocular depth estimation. More importantly, the pseudo depth labels serve to impose a cross-view consistency on the estimated monocular depth and segmentation maps of two views. This enables for mitigating the mismatch problem incurred by inconsistent prediction results across two views. A thorough ablation study validates that the cross-view consistency leads to a substantial performance gain by ensuring inference-view invariance for the two tasks. | [
"Multi-task learning",
"monocular depth estimation",
"semantic segmentation",
"pseudo label",
"cross-view consistency"
] | Reject | https://openreview.net/pdf?id=b4Phn_aTm_e | https://openreview.net/forum?id=b4Phn_aTm_e | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"bbNFL0a58M",
"MFcZO95s5am",
"MLT1MwsI8yr",
"K8s1O2GJ8U-",
"vd8T39GylIi",
"R9Iqc8HgmuR",
"zkWSLA5i8gP"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513845,
1605585580989,
1605585474759,
1605585108909,
1603942828338,
1603927547043,
1603914758346
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3408/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3408/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3408/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3408/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3408/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3408/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"All reviewers agree that the paper overclaims its contributions both in the main text and in the title, and given also the limited novelty and scope it is not suggested for publication.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"1. Scope: It seems the paper works specifically on the left-right warping consistency of semantic label and depth, while the major scope told in the title and introduction is about pseudo labels for general multi-task learning, which is by far not shown in the presented experiments. It needs to be adjusted.\\n\\n\\u2192 I agree with your opinion about the title. We will revise the title so as to reflect the main contribution of this work. As mentioned above, the cross-view consistency loss used in the paper can be applied to all tasks in which stereo image pairs are available. We will conduct more experiments by applying it to all tasks.\\n\\n2. Method: The major methodology is obtaining consistency losses by warping depth and semantics with respect to the stereo output. The warping loss contains 6 terms through enumeration; are all of them useful? Is there a lot of redundancy? What happens if half of them are dropped? The ablation shows using consistency is useful, while the usefulness of each term and how to balance these losses have not been proven. \\n\\n\\u2192 Thank you for your suggestion. The results of dropping some losses were attached to the ablation study, and hyper-parameters for the six losses are described in the experiments. As you suggested, we will investigate the usefulness of each term in more detail.\\n\\n3. Experiments: Compared to other SoTA algorithms, it seems that for depth the results are comparable to many existing algorithms, and for semantics it is hard to compare against other SoTA semantic algorithms such as HRNet etc. In my opinion, MTL has two benefits: different tasks can help the output results, and tasks can be unified into a single network for more efficient inference. It might be better to also compare the running speed and Flops for performing multiple tasks to better support the idea. 
\\n\\n\\u2192 While SoTA monocular depth estimation methods usually rely on simple architectures according to the monocular depth estimation literature, the SoTA segmentation methods are based on very complicated architectures. We guess our segmentation performance is not as good as SoTA segmentation algorithms since our multi-task learning architecture relies on a simple encoder-decoder architecture. As you suggested, we will compare the runtime and Flops for performing multiple tasks.\\n\\n4. Writing: Overall, it is easy to follow; however, the figures are too small, making it hard to diagnose the differences between multiple predictions. \\n\\n\\u2192 Thank you for your suggestion. We had to keep the figures small due to the page limit. Instead, more results with a relatively large size are provided in the Appendix.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"1. The problem setting in this work, which requires stereo image pair for learning network, is different from the prior work (e.g., Liu et al 2019). The proposed method also uses a pre-trained stereo-matching networks and confidence estimation network, which essentially included additional prior information/training data. Therefore, it is not surprising to see the performance improvement over the prior work.\\n\\n\\u2192 Your comment is right. Indeed, we attempted to show the effectiveness of the cross-consistency loss using pseudo depth labels and confidence maps. When comparing \\u2018MTAN\\u2019 (Liu et al 2019) of Table 3 and \\u2018Baseline\\u2019 in Table 5, we can see that even the baseline that uses only the pseudo depth labels outperforms \\u2018MTAN\\u2019 (Liu et al 2019). The performance gain becomes higher by leveraging the cross-consistency loss as reported in Table 5.\\n\\n2. While the proposed cross-view loss improves the segmentation, the overall design is quite complicated. There are many hyper-parameters in the loss functions, and it is unclear how their values would generalize to other datasets that are not road scenes. Moreover, based on the ablative study, the improvement over the noisy depth setting is marginal (Table 4 and 5). Also, it is unclear whether all those terms make significant contributions to the performance improvements, and sometimes it even hurts the performance. \\n\\n\\u2192 I agree that the hyper-parameter tuning may not be easy. However, we found that the proposed loss is not very sensitive to hyper-parameters on the datasets used in the experiments. We will investigate the generalization capability in diverse scenes.\\nAs shown in the ablation study of Table 4 and 5, it was observed that when adding the depth cross-view consistency loss, the monocular depth accuracy is improved over the baseline. 
It was also seen that the segmentation cross-view consistency loss leads to a performance improvement in segmentation. However, under the simple multi-task architecture sharing the encoder, boosting the depth and segmentation accuracy significantly is quite challenging. We will investigate using more sophisticated multi-task architectures proposed in recent works to address this issue.\\n\\n3. The experimental evaluation is a bit lacking in the following aspects.\\nThis work only uses two road-scene datasets for evaluation, but those two datasets are quite similar to each other, and hence do not have sufficient diversity. Other works typically also use NYU-v2, which is an indoor dataset. Can the author also report their method's performance on NYU-v2?\", \"the_evaluation_on_the_cityscapes_dataset_seems_unconvincing_due_to_two_issues\": \"First, the depth performance in Table 3 seems very different from the prior literature, and in particular, the Abs values are much worse than the SOTA results. Secondly, it lacks comparisons with Jha et al. 2020, which achieves better performance than the results shown in Table 3.\\nThe improvement from the proposed loss seems very marginal in the ablative study. Different combinations of proposed components typically give minor or mixed improvement on segmentation or depth estimation. It is unclear how effective the confidence weighting or the multiple consistency constraints are.\\n\\n\\u2192 Thank you for your valuable suggestion. Unfortunately, it is infeasible to apply the cross-view consistency loss to the NYU-v2 dataset, which provides only a single image, not stereo image pairs. We will seek various datasets for conducting experiments to ensure diversity, as you suggested.\\nIn the prior literature, the performance in the Cityscapes dataset was usually measured with disparity maps obtained using the hand-crafted stereo matching method, semi-global matching (SGM) [Hirschmuller, 2008]. 
We found that the SGM disparity map used for the performance evaluation contains disparity values that are 0 or close to 0 in many regions. Since these values are meaningless, we excluded them from the performance evaluation. Note that for a fair comparison, we measured the performance of all methods under the same setup. The code will be publicly available soon. As suggested, we will include the comparison with Jha et al. 2020.\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"1. Overall, the paper does not have much novelty in my opinion. Joint learning of depth and semantic segmentation is clearly not new, and the paper does not provide new or particular insight towards this combined learning.\\nThe use of pseudo label itself is nowadays quite common in the vision community. And, the pseudo labels are used in the paper in a pretty trivial way in my opinion.\\n2. The cross-view consistency across two views in a stereo setup is not new either. It has been intensively used in the monocular depth estimation. In addition, this constraint is applicable to any individual task and does not seem to fit into the multi-task learning context, which is the main focus of this paper. I would expect specific insights in making use of pseudo labels towards solving the depth and semantics predictions; otherwise, any other tasks such as moving objects segmentation\\n\\n\\u2192 Thank you for your analysis and suggestions. In our humble opinion, while existing works [Godard et al., 2017; 2019; Watson et al., 2019; Chen et al., 2019] for imposing the cross-view consistency across two views use predicted disparity maps, our method attempts to impose the cross-view consistency by making use of pseudo disparity maps and their associated confidences. We showed the effectiveness of the cross-view consistency based on the pseudo label and its confidence through the ablation study in Table 4. We could see that when the cross-consistency loss was applied by using (incomplete) predicted disparity, it did not improve the performance. We believe this is a difference from the existing papers [Godard et al., 2017; 2019; Watson et al., 2019; Chen et al., 2019]. Nevertheless, we will continue to investigate the applicability of using pseudo labels in the depth and semantics predictions, as you suggested.\\n\\n3. 
The current title is too general, so much so that the main arguments made by the paper are not reflected in the title; I believe that the left-right consistency brought about by the pseudo ground truth depth is the main claim of the paper.\\n\\n\\u2192 I agree with your opinion about the title. We will revise the title so as to reflect the main contribution of this work. As mentioned above, the cross-view consistency loss used in the paper can be applied to all tasks in which stereo image pairs are available. We will conduct more experiments by applying it to all tasks.\"}",
"{\"title\": \"This paper presents a framework which leverages pseudo depth ground truth to train monocular depth and semantic segmentation networks.\", \"review\": \"The paper presents a framework to learn depth prediction and semantic segmentation jointly; the key idea lies in making use of the pseudo depth label from stereo to provide supervision and as a means to enforce cycle consistency between the left and right views of the stereo.\", \"reasons_for_scores\": \"overall the paper is rather incremental and the idea is neither novel nor significant in my opinion. I do not see interesting or deep insight from the paper towards the depth and semantic segmentation tasks.\", \"pros\": [\"First, the paper is clearly written and easy to follow. The proposed framework is pretty straightforward.\", \"The idea of joint learning of depth and semantic segmentation is good considering their tightly coupled nature.\", \"The use of cross-view consistency as a constraint is good.\"], \"cons\": [\"Overall, the paper does not have much novelty in my opinion. Joint learning of depth and semantic segmentation is clearly not new, and the paper does not provide new or particular insight towards this combined learning.\", \"The use of pseudo label itself is nowadays quite common in the vision community. And, the pseudo labels are used in the paper in a pretty trivial way in my opinion.\", \"The cross-view consistency across two views in a stereo setup is not new either. It has been intensively used in the monocular depth estimation. In addition, this constraint is applicable to any individual task and does not seem to fit into the multi-task learning context, which is the main focus of this paper. 
I would expect specific insights in making use of pseudo labels towards solving the depth and semantics predictions; otherwise, any other tasks such as moving objects segmentation\", \"The current title is too general, so much so that the main arguments made by the paper are not reflected in the title; I believe that the left-right consistency brought about by the pseudo ground truth depth is the main claim of the paper.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Several concerns on the problem setting and experimental evaluation\", \"review\": \"The paper presents a joint learning strategy for simultaneous semantic segmentation and monocular depth estimation. The main idea is to exploit stereo pairs in training and introduce pseudo-depth label estimated from pre-trained stereo-matching networks. Given the pseudo-depth with confidence estimation, the method proposes a cross-view consistency loss for both depth and semantic predictions, which augments the standard segmentation loss. The proposed method is evaluated on KITTI and Cityscapes datasets with comparisons to prior work and ablative study.\", \"strengths\": [\"The proposed cross-view loss on semantic segmentation seems interesting and effective on two benchmarks, which improves the segmentation performance.\", \"The overall method achieves competitive performance on semantic segmentation and monocular depth estimation on the KITTI and Cityscapes.\"], \"concerns\": [\"The contribution of this work to the multi-task learning is a bit overclaimed. The targeted problem of the paper is solely on joint semantic segmentation and monocular depth estimation. Based on the model and loss design, it is non-trivial to extend them to other scene understanding tasks.\", \"The problem setting in this work, which requires stereo image pair for learning network, is different from the prior work (e.g., Liu et al 2019). The proposed method also uses a pre-trained stereo-matching networks and confidence estimation network, which essentially included additional prior information/training data. Therefore, it is not surprising to see the performance improvement over the prior work.\", \"While the proposed cross-view loss improves the segmentation, the overall design is quite complicated. There are many hyper-parameters in the loss functions, and it is unclear how their values would generalize to other datasets that are not road scenes. 
Moreover, based on the ablative study, the improvement over the noisy depth setting is marginal (Table 4 and 5). Also, it is unclear whether all those terms make significant contributions to the performance improvements, and sometimes they even hurt the performance.\", \"The experimental evaluation is a bit lacking in the following aspects.\", \"This work only uses two road-scene datasets for evaluation, but those two datasets are quite similar to each other, and hence do not have sufficient diversity. Other works typically also use NYU-v2, which is an indoor dataset. Can the authors also report their method's performance on NYU-v2?\", \"The evaluation on the Cityscapes dataset seems unconvincing due to two issues: First, the depth performance in Table 3 seems very different from the prior literature, and in particular, the Abs values are much worse than the SOTA results. Secondly, it lacks comparisons with Jha et al. 2020, which achieves better performance than the results shown in Table 3.\", \"The improvement from the proposed loss seems very marginal in the ablative study. Different combinations of proposed components typically give minor or mixed improvements on segmentation or depth estimation. It is unclear how effective the confidence weighting or the use of multiple consistency constraints is.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"too large a scope for relatively limited experiments\", \"review\": \"This paper proposes to use depth pseudo ground truth (generated with a pretrained stereo network) as augmented information to help a joint prediction network for depth and segmentation estimation.\", \"pros\": \"Multi-task learning is an important direction to explore, and left-right consistency has been shown to be very useful in depth estimation (Godard et al. 2017). The extension using a similar idea for depth and semantics is reasonable, and experiments verify the effectiveness of the proposed strategies.\", \"cons\": \"1) Scope: \\nIt seems the paper works specifically on the left-right warping consistency of semantic labels and depth, while the major scope told in the title and introduction is about pseudo labels for general multi-task learning, which is by far not shown in the reported experiments. It needs to be adjusted. \\n\\n\\n2) Method:\\nThe major methodology is obtaining consistency losses by warping depth and semantics with respect to the stereo output. The warped loss contains 6 terms obtained through enumeration; are all of them useful? Is there a lot of redundancy, and what happens if half of them are dropped? The ablation shows that using consistency is useful, while the usefulness of each term and how to balance these losses have not been proven. \\n\\n\\n3) Experiments\\nCompared to other SoTA algorithms, it seems that for depth the results are comparable to many existing algorithms, and for semantics it is hard to compare against other SoTA semantic algorithms such as HRNet etc. In my opinion, MTL has two benefits: either different tasks can help each other's results, or unifying tasks into a single network allows more efficient inference. It might be better to also compare the running speed and FLOPs for performing multiple tasks to better support the idea. 
\\n\\n4) Writing\\nOverall, it is easy to follow; however, the figures are too small, making it hard to diagnose the difference between multiple predictions.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
EQtwFlmq7mx | Stochastic Proximal Point Algorithm for Large-scale Nonconvex Optimization: Convergence, Implementation, and Application to Neural Networks | [
"Aysegul Bumin",
"Kejun Huang"
] | We revisit the stochastic proximal point algorithm (SPPA) for large-scale nonconvex optimization problems. SPPA has been shown to converge faster and more stably than the celebrated stochastic gradient descent (SGD) algorithm, and its many variations, for convex problems. However, the per-iteration update of SPPA is defined abstractly and has long been considered expensive. In this paper, we show that efficient implementation of SPPA can be achieved. If the problem is a nonlinear least-squares problem, each iteration of SPPA can be efficiently implemented by Gauss-Newton; with a linear algebra trick the resulting complexity is of the same order as that of SGD. For more generic problems, SPPA can still be implemented with L-BFGS or accelerated gradient with high efficiency. Another contribution of this work is the convergence of SPPA to a stationary point in expectation for nonconvex problems. The result is encouraging in that it admits more flexible choices of the step sizes under similar assumptions. The proposed algorithm is elaborated for both regression and classification problems using different neural network structures. Real data experiments showcase its effectiveness in terms of convergence and accuracy compared to SGD and its variants. | [
"sppa",
"convergence",
"nonconvex optimization",
"implementation",
"application",
"sgd",
"algorithm",
"neural networks"
] | Reject | https://openreview.net/pdf?id=EQtwFlmq7mx | https://openreview.net/forum?id=EQtwFlmq7mx | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"LSLYD2hYRJI",
"1VbaktTPQzJ",
"ygqk6REafp",
"buj5oICT9QC",
"xYX4ikYSEuM"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513914,
1604459931095,
1603894100264,
1603890930842,
1603519046063
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3407/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3407/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3407/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3407/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"All reviewers recommend rejection: concerns were raised in terms of technical correctness, quality of presentation and the quality of experiments. There was no rebuttal. The AC agrees with the reviewers and recommends rejection.\"}",
"{\"title\": \"the novelty is not enough\", \"review\": \"This paper revisits the stochastic proximal point algorithm (SPPA) and applies SPPA to solve nonconvex optimization problems with efficient subproblem solvers.\\n\\nFirstly, there is a work [Chen et al. 2020] which also provides a convergence result for SPPA on manifold problems, and it is not in the weakly convex setting.\\n\\nSecondly, the convergence rate of SPPA is 1/epsilon^2, which is the same as SGD. Regarding the convergence result, there is no advantage of SPPA over SGD, and the authors should discuss this. The convergence rate is asymptotic, and this paper does not point out which iteration we should use as the final output.\\n\\nThirdly, the convergence analysis is rather standard and lacks novelty. Moreover, the convergence of Gauss-Newton and L-BFGS for solving the proximal subproblem should also be provided, since the main concern for PPA-type methods lies in the convergence behavior and the efficiency of solving the proximal subproblem. Furthermore, in the experiment part, a comparison of running time between SPPA and SGD, ADAM, and Adagrad should also be provided. \\n\\nLastly, I wonder whether Assumption 1 is reasonable, since I have not seen this assumption in other nonconvex stochastic programming papers. The authors should remark on this assumption; it would be better to provide some references for it.\", \"confidence_level\": \"5, absolutely certain.\", \"rating\": \"4: Ok but not good enough - rejection\", \"references\": \"[1] Manifold Proximal Point Algorithms for Dual Principal Component Pursuit and Orthogonal Dictionary Learning.\\nShixiang Chen, Zengde Deng, Shiqian Ma, Anthony Man-Cho So. arXiv preprint arXiv:2005.02356, 2020.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Insufficient theoretical and experimental results\", \"review\": \"This paper studies the stochastic proximal point algorithm (SPPA) for large-scale nonconvex optimization problems. The authors propose to use Gauss-Newton to perform the proximal update in nonlinear least squares and L-BFGS or accelerated gradient for generic problems. The authors derive the convergence of SPPA to a stationary point in expectation for nonconvex problems, and perform numerical experiments to showcase the effectiveness of the proposed method compared to SGD and its variants.\\n\\nThe paper is generally clear, yet the convergence analysis is mainly based on adapting Bottou et al. (2018). While the proposed methods could be significant additions to stochastic optimizers in deep learning, I found the study of the current paper is insufficient; see the cons below.\", \"pros\": [\"It is well-known that the proximal point algorithm (PPA) converges faster than gradient descent (GD), and the same holds for their stochastic counterparts. One advantage is that the step sizes in PPA and SPPA can be larger than those in GD and SGD, which can speed up convergence. The proximal steps in PPA and SPPA are however hard to perform for generic (nonconvex) problems since the proximity operator of the objective functions usually do not have closed forms. The proposal of the authors to perform efficient inner-loop optimization schemes like Gauss-Newton and L-BFGS allows approximation of such proximal steps, without much computational burden added.\"], \"cons\": [\"Theory:\", \"I found that Assumption 3 is too strong and do not think it is a standard assumption. Otherwise the constant $ c $ can be very large. This also leads to a question of why using the upper bound $ c $ is the second part of the RHS of (20) but not the first part?\", \"Also why $ \\\\sqrt{\\\\lambda_t} $ instead of $ \\\\lambda_t $ in (20)? 
If I did not misunderstand, it is derived from (14).\", \"As $ c $ and hence $ C $ can be very large, the bounds (9) and (10) in Theorems 2 and 3 can well be vacuous.\", \"Also the quantifier in Assumption is missing (for all $ i\\\\in \\\\lbrace 1, \\\\ldots, n \\\\rbrace $?)\", \"Another pitfall of the theoretical results of this work is that the convergence analyses of the proposed SPPA-LBFGS, SPPA-AGD and SPPA-GN are all missing, especially since this work considers nonconvex problems.\", \"Experiments:\", \"To show that the proposed methods are really comparable to or outperform methods like SGD or Adam, numerical experiments should be performed on data sets of larger scales and much deeper networks. In particular, the regression data sets in the paper are so small that the gain of the proposed method over other baselines is marginal.\"], \"typos\": [\"Theorem 1: do you mean the limit is equal to 0 or is finite? I guess something is missing.\", \"Proof of Theorem 2: $ L(\\\\theta_1) - \\\\mathbb{E}[L(\\\\theta_{T+1})] $ instead of $ L(\\\\theta_0) - \\\\mathbb{E}[L(\\\\theta_{T})] $\", \"(10): the LHS of the inequality should be $ \\\\alpha_t $ instead of $ \\\\alpha $\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A badly-written paper\", \"review\": \"This paper considers the stochastic proximal point algorithm for solving nonconvex nonlinear least squares optimization problems. A linearization strategy is used to accelerate the procedure, and in each iteration the algorithm works by solving a linear system. Some convergence analysis for the proposed method is presented. Some experiments have been conducted.\\n\\nI have the following comments.\\n1. In the proposed algorithm, the authors only take one example instead of a batch of training examples to construct the gradient. This strategy often results in much larger variance and slow convergence in practice.\\n\\n2. The results in Proposition 1 do not imply the convergence of the algorithm. The theoretical analysis is incremental. \\n\\n3. The numerical comparisons are not sufficient. The authors should include comparisons with state-of-the-art second-order optimization solvers such as K-FAC.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Analysis seems not rigorous\", \"review\": \"In this paper the authors study the stochastic proximal point algorithm for nonconvex optimization, where the model is iteratively updated by solving a proximal optimization problem based on a randomly selected loss function. The authors develop an efficient implementation for solving the proximal optimization problem: first for nonlinear least squares and then for general losses. Then the authors study the convergence rates for the developed algorithm. Upper bounds on the expected average squared gradients are developed for both constant step sizes and diminishing step sizes. Experimental results are also reported to support the algorithm in practical implementations.\\n\\nComments.\\n\\n1. The authors show that the proximal optimization problem can be efficiently solved. This is nice. However, the idea in the development of the algorithm seems standard. It seems a bit surprising that this algorithm has not been developed before.\\n\\n2. The convergence rates are a bit surprising. For example, if we set $\\\\lambda=0$ in Thm 2, then eq (9) shows that the averaged gradient converges to zero, which should not happen since in this case the algorithm makes no progress.\\n\\n3. In Appendix A, the authors make two assumptions in (13) and (14). However, it remains unclear whether these two assumptions can be satisfied simultaneously. In particular, does the stationary point in (14) satisfy the sufficient decrease in (13)?\\n\\n4. I think eq (20) is not correct. The term $\\\\sqrt{\\\\lambda_t}c$ should be $c/\\\\lambda_t$. As the theoretical results depend on this inequality, the results are not correct.\\n\\n5. In Assumption 3, the authors assume the updates lie in a compact set. This can only be guaranteed if you impose a constraint on the space. However, the constraint would make the stationarity in (14) no longer hold. \\n\\n6. In Theorem 1, the equation is not complete.\\n\\n7. 
In eq (10), $\\\\alpha$ should be $\\\\alpha_t$\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
37Fh1MiR5Ze | A Chaos Theory Approach to Understand Neural Network Optimization | [
"Michele Sasdelli",
"Thalaiyasingam Ajanthan",
"Tat-Jun Chin",
"Gustavo Carneiro"
] | Despite the complicated structure of modern deep neural network architectures, they are still optimized with algorithms based on Stochastic Gradient Descent (SGD). However, the reason behind the effectiveness of SGD is not well understood, making its study an active research area. In this paper, we formulate deep neural network optimization as a dynamical system and show that the rigorous theory developed to study chaotic systems can be useful to understand SGD and its variants. In particular, we first observe that the inverse of the instability timescale of SGD optimization, represented by the largest Lyapunov exponent, corresponds to the most negative eigenvalue of the Hessian of the loss. This observation enables the introduction of an efficient method to estimate the largest eigenvalue of the Hessian. Then, we empirically show that for a large range of learning rates, SGD traverses the loss landscape across regions with largest eigenvalue of the Hessian similar to the inverse of the learning rate. This explains why effective learning rates can be found to be within a large range of values and shows that SGD implicitly uses the largest eigenvalue of the Hessian while traversing the loss landscape. This sheds some light on the effectiveness of SGD over more sophisticated second-order methods. We also propose a quasi-Newton method that dynamically estimates an optimal learning rate for the optimization of deep learning models. We demonstrate that our observations and methods are robust across different architectures and loss functions on CIFAR-10 dataset. | [
"learning theory",
"stochastic gradient descent",
"deep learning",
"neural networks",
"dynamical systems",
"chaos theory",
"Lyapunov exponents"
] | Reject | https://openreview.net/pdf?id=37Fh1MiR5Ze | https://openreview.net/forum?id=37Fh1MiR5Ze | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"BEJpmn_TRTJ",
"XjY-2I_UOO8",
"SpTOm6fba6t",
"X9fIP4qfOsJ",
"8gIIjSDApkV",
"pYyn68tBBn4",
"GjfqpjX20du"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513984,
1605232695033,
1605232447749,
1605232220155,
1603961702571,
1603909652923,
1603843154004
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3406/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3406/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3406/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3406/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3406/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3406/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The Authors study the learning dynamics of deep neural networks through the lenses of chaos theory.\\n\\nThe key weakness of the paper boils down to a lack of clarity and precision. Chaos theory seems to be mostly used to computing eigenvalues but is not used to derive meaningful insights about the learning dynamics. R2 noted, \\\"Chaos theory provides a way of computing eigenvalues but does not give much understanding on the neural network optimization.\\\". R4 noted, \\\"The authors use an insight from chaos theory to derive an efficient method of estimating the largest and smallest eigenvalues of the loss Hessian wrt the weight\\\". Hence, statements such as \\\"the rigorous theory developed to study chaotic systems can be useful to understand SGD\\\" seem unsubstantiated.\\n\\nReduced to its essence, the key contribution is (1) a method to compute the top and the smallest eigenvalue, (2) the observation that the spectral norm of the Hessian along SGD optimization trajectory is related to the inverse of the learning rate, and (3) a method to automatically tune the learning rate.\", \"let_me_discuss_these_three_contributions\": [\"The significance of the first contribution is unclear, as pointed out by R2. Indeed there are other methods (e.g. power method, Lanczos) for computing these quantities that should achieve either a similar speed or similar stability. Given the rich history of developing estimators of these quantities, a much more detailed evaluation is warranted to substantiate this claim.\", \"The core insight that the top eigenvalue of the Hessian in SGD is related to the inverse of the learning rate in the training of deep neural networks is nontrivial but is not fully novel. Closely related observations were also shown in the literature.\", \"This precise statement however indeed was not stated in the literature. 
This contribution could be a basis for acceptance, but the paper is not sufficiently focused on it, and the evaluation of this claim is a bit narrow in scope.\", \"Finally, there is a wide array of methods to tune the learning rate. As noted for example by R3, \\\"There are numerous ideas for proposing new optimization and without careful, thorough comparison to baseline, well-known methods\\\", the evaluation is too limited to treat this as a core contribution.\", \"Based on the above, I have to recommend the rejection of the paper. At the same time, I would like to thank the Authors for submitting the work for consideration to ICLR. I hope the feedback will be useful for improving the work.\"]}",
"{\"title\": \"reply to review\", \"comment\": [\"Replies to \\\"cons\\\":\", \"The analysis is presented using continuous time dynamics of SGD for simplicity of exposition. All the concepts used (chaos, Lyapunov exponents) can be derived and applied to discrete time systems.\", \"We will add these to the literature discussion.\", \"The Lanczos method is an extension of the power method to calculate multiple eigenvalues. As explained in the appendix, what we are doing can also be extended to multiple eigenvalues.\", \"We disagree on the significance of the novelty.\", \"Our biggest networks are not toy models. The biggest network is a ResNet18 architecture. Our results apply to a simple linear mean square error fit, MLPs, up to deep ResNet architectures.\", \"In figure 2 we show how this simple method allows a decrease in the loss for any architecture choice, with no need for LR fine-tuning. We do not claim it to be competitive with SGD+momentum/Adam.\", \"Replies to \\\"questions\\\":\", \"Our method is a power method where the rate of convergence (Lyapunov exponent) is controlled to be one (\\u201cRe-scale the learning rate\\u201d step in Algorithm 1). Doing this avoids slow convergence (small lambda) and instability (large lambda).\", \"We believe that investigating the chaotic behavior in parameter space is a very relevant question. For example, the structure of the attractor determines the properties of the ensemble of all possible solutions. In the case of chaotic systems, this structure is often non-intuitive.\", \"We will amend this argument in the revision.\", \"The analysis in appendix B can in principle be generalized to other algorithms.\"]}",
"{\"title\": \"reply to weaknesses\", \"comment\": \"Reply to weaknesses:\\n1) CIFAR-10 was used in two configurations: 2-class and 10-class classification. Additional datasets could be added quickly. We kept it concise for the sake of the space limits. However, we do not think that they would strengthen the conclusions greatly.\\n2) The calculation of the largest Lyapunov exponent (section 2.2) is a power method. Our contribution (explained in section 3) of controlling the Lyapunov exponent to be unity avoids slow convergence (Lyapunov exponent too small) and instability (Lyapunov exponent too large).\\n3) We did not investigate extensively calculating multiple eigenvalues. The stability benefits should apply to this scenario as well.\\n4) The reviewer is correct on this simple case. However, there are a number of non-intuitive behaviours of chaotic dynamical systems that would be very difficult to \\u201crediscover\\u201d from scratch.\\n5) What we found in addition is that the eigenvalues become approximately equal to the inverse of the LR.\\n6) Our adaptive optimization is a quasi-Newton method based on the eigenvalue of the Hessian. SGD+momentum and Adam are not based on the eigenvalues of the Hessian.\\n7) If more than one eigenvalue is used, the LR can be made dependent on the direction.\\n8) We believe the techniques described here have great potential for understanding these additional interesting questions about NN optimization.\"}",
"{\"title\": \"Clarifications\", \"comment\": \"We thank the reviewer for praising that the technical derivation is sound.\", \"replies_to_the_weak_points\": \"1) The inverse of the most negative eigenvalue of the Hessian defines the chaotic timescale of the training dynamics. In the training of typical neural networks this time is short, from just a few iterations up to an epoch. The consequences of this behavior for NN optimization are certainly an interesting question. We do not see this as a weak point; we just decided not to focus on this topic in our current submission.\\n2) A large value for the first eigenvalue of the Hessian corresponds to a narrow valley. The reviewer correctly points out that our analysis shows that choosing (imposing) a learning rate affects the \\u201cwidth\\u201d of the solution. On the other hand, the Hessian can be used to calculate the locally \\u201coptimal\\u201d learning rate by Quasi-Newton. If this is done instead, the \\u201cwidth\\u201d of the final solution will be different. There is no tension between these two observations.\\n3) The Hessian vector product method returns a derivative from automatic differentiation. Our method returns finite difference second derivatives. In networks with ReLU activations it can make a significant difference. The density method calculates an estimate of the distribution of all the eigenvalues. Our method returns the largest one.\"}",
"{\"title\": \"A way to compute the top eigenvalues of Hessian from Lyapunov exponents\", \"review\": \"Objective of the work: The paper uses chaos theory to study the dynamics of SGD. It provides an algorithm to compute the most positive and the most negative eigenvalues of the Hessian based on analyzing the Lyapunov exponents. The paper shows that the largest eigenvalue of the Hessian is similar to the inverse of the learning rate.\", \"strong_points\": \"The paper proposes an algorithm that can quickly estimate the largest eigenvalues of the Hessian based on the analysis of the Lyapunov exponents. The technical derivation is sound.\", \"weak_points\": \"1. Chaos theory provides a way of computing eigenvalues but does not give much understanding of neural network optimization. For example, what does the timescale of the most negative eigenvalue mean for NN optimization? \\n\\n2. There are several points the paper wants to present; however, they are not logically connected: the most negative eigenvalue, the largest eigenvalue of the Hessian, the relation between the largest eigenvalue of the Hessian and the learning rate.\\nThe experiments show that the eigenvalues of the Hessian adapt to the learning rate, which indicates the learning rate sort of affects the Hessian, which in turn indicates that we should not follow the largest Hessian eigenvalue if we want a certain feature, i.e., a large eigenvalue for a wide valley. However, the paper also proposes setting the learning rate according to the Hessian's leading eigenvalues. So which comes first, the eigenvalue or the learning rate? \\n\\n\\n3. For computing the eigenvalues of the Hessian, the paper does not give sufficient discussion or experiments comparing the proposed algorithms with other existing approaches (the Hessian vector product method, the density method [1]) to verify the efficiency and the difference. \\n\\nI would not recommend acceptance for now.\\n\\n[1] Ghorbani et al. 
An Investigation into Neural Net Optimization via Hessian Eigenvalue Density, ICML 2019\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A review\", \"review\": \"### 1. Brief summary\\nThe authors use an insight from chaos theory to derive an efficient method of estimating the largest and smallest eigenvalues of the loss Hessian wrt the weights. To do that, they use nearby weight space positions, optimize for a bit (either gradient climbing or descending), check how quickly the points are departing from each other, and use that to estimate the extreme eigenvalues using a connection to Lyapunov coefficients in chaos theory. Then they use on the fly estimated largest eigenvalue to automatically tune the learning rate of SGD.\\n\\n### 2. Strengths\\n* The paper makes a connection to chaos theory which typical members of the ML community are not familiar with\\n* They derive an alternative to the usual top and bottom eigenvalue calculation methods that are employed\\n* They try their automatic LR tuning in practice\\n\\n### 3. Weaknesses and points of confusion\\n\\n1) The only dataset tested was CIFAR-10. I am not saying you need to go directly to ImageNet, but a variety of datasets would be nice to see. You do try a bunch of architectures, so why not datasets as well. You could add MNIST, Fashion MNIST and SVHN relatively quickly and it would greatly strengthen the empirical conclusions.\\n\\n2) The simplest method for estimating the top eigenvalue -- the power method -- is also linear in the number of parameters. What advantage does your method have over that?\\n\\n3) The power method tends to be unstable (in its naive implementation) when used to get the less than highest eigenvalues. Does your method suffer from similar practical instabilities?\\n\\n4) The connection between the top negative eigenvalue and the rate of departure of nearby points in the weight space from each other (the same for gradient ascent and the top eigenvalue) does not seem very surprising to me. 
This might not be a valid point, but it seems that it is a simple consequence of optimizing in a quadratic well with a loss of the form 1/2 x H x^T, where H is the Hessian and x is the minimum-centered position. The highest negative eigenvalue will be the one pushing you out as exp(|lambda| t). Why do I need chaos theory to see that? I might be wrong and I'm ready to be corrected, but it seems relatively simple to derive without much chaos-theoretic baggage attached to it.\\n\\n5) Almost every stability analysis of gradient-based algorithms will include the condition on the top eigenvalue being smaller than 2/LR and a very similar analysis to what you did using chaos theory here. I'm not sure what the new insight is here. Again, please correct me if I'm wrong.\\n\\n6) It seems that what you are describing with your adaptive optimization is very similar to some existing algorithms. [1] presents the lookahead optimizer that shares many features, and many variants of SGD (such as SGD+Momentum or Adam) likely do something very similar albeit implicitly.\\n\\n7) In Equation 10 you make the B a matrix, but it turns out to be an identity rescaled by the top eigenvalue -- a scalar. I get that this is the same, but it seems a bit misleading -- I got my hopes up for a proper matrix conditioning the LR but it turned out to be a scalar. This is a minor point, no need to address it.\\n\\n8) You argue that the 2/LR top eigenvalue selection by the optimizer somehow helps explain why DL works so well. But to me the more interesting questions remain: why are such places available, and how are they reachable from init using gradient-based algorithms.\\n\\n### 4. Some papers that seem relevant\\n[1] Lookahead Optimizer: k steps forward, 1 step back. Michael R. 
Zhang, James Lucas, Geoffrey Hinton, Jimmy Ba https://arxiv.org/abs/1907.08610 \\n\\n[2] Deep Ensembles: A Loss Landscape Perspective by Stanislav Fort, Huiyi Hu, Balaji Lakshminarayanan (https://arxiv.org/abs/1912.02757) studies how trajectories in the weight space diverge from different initializations.\\n\\n[3] Linear Mode Connectivity and the Lottery Ticket Hypothesis by Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin (https://arxiv.org/abs/1912.05671) looks at how trajectories that start from a preoptimized point diverge with additional training.\\n\\n[4] Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel by Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel Roy, Surya Ganguli (https://arxiv.org/abs/2010.15110 and NeurIPS 2020) also looks at how trajectories from nearby points diverge. They also look at the sensitivity to initial conditions.\\n\\n[5] The large learning rate phase of deep learning: the catapult mechanism by Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, Guy Gur-Ari studies the stability of the training under finite step size and with SGD in quite some detail and it could be relevant. (https://arxiv.org/abs/2003.02218)\\n\\n[6] The Break-Even Point on Optimization Trajectories of Deep Neural Networks by Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho, Krzysztof Geras (https://arxiv.org/abs/2002.09572 and ICLR 2020) looks at the crucial effect of the early stages of training and the instability in it.\\n\\n### 5. Summary\\nThis paper presents a nice new method for estimating the lowest and largest eigenvalues of the DNN loss Hessian wrt weights using the divergence of nearby points in the weight space under optimization. They do this by using a chaos-theoretic language. 
While those methods seem useful, I do not see why chaos theory was needed to derive them. I appreciate the link and believe that more good stuff could come out of it, but as is I don't think this paper provides much new to the field on its own. However, I am not an expert on this subfield and **I am ready to revise my score** if the authors convince me otherwise.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #3\", \"review\": \"Interesting approach using ideas in chaos theory to deep learning but its merit is not clear yet.\", \"summary\": \"This paper connects ideas in chaos theory to understand training dynamics of neural networks. In particular the paper uses Lyapunov exponent, divergence rate of infinitesimal close trajectory and connects them to Hessian eigenvalues. The authors show that the largest Lyapunov exponent corresponds to the most negative eigenvalue of the Hessian. Then the paper claims that this provides an efficient method to estimate the largest eigenvalue and connect to using learning rate related to largest eigenvalue.\\n\\nUsing this method, the paper claims SGD finds loss landscape regions where the largest eigenvalue of the Hessian is similar to the inverse of the learning rate. Lastly, the paper proposes a quasi-Newton method with dynamic estimation of optimal learning rate.\", \"reason_for_score\": \"While connection between ideas in chaos theory and neural network optimization is interesting and worth pursuing, I believe current work is underdeveloped in the sense that comparison to well-known methods are not provided thoroughly. Proposed method of estimating largest eigenvalue of Hessian is presented without any comparison to well-known methods such as Lanczos and proposed quasi-Newton method\\u2019s utility is unproven as is.\", \"pros\": \"The paper proposes an interesting connection between analysis in chaos theory and neural network optimization.\\n\\nPaper is clearly written and exposition to chaos theory in section 2 is a nice read. \\n\\nEmpirical observation that maximum eigenvalue of Hessian over training matching learning rate for various experimental settings is quite interesting. 
While I would have been interested to see similar observations for commonly used step learning rate schedules.\", \"cons\": \"The analysis is based on continuous time dynamics of SGD which is a fine toy-model but misses various interesting finite step dynamics in Neural Network training. For example [1] have shown that the largest learning rate based on eigenvalue estimation is not sufficient for explaining neural network training dynamics especially in the well-performant ones. \\n \\nOne major problem I see is missing comparison to well known literature. There are numerous analyses for studying Hessian of Neural Networks with various methods (e.g. [2, 3, 4] and references therein), and it is hard to find why the proposed method using the Lyapunov exponent is either more interesting or useful.\\n\\nFor one thing there are various methods to estimate large eigenvalues of large matrices. For example, naively I would have used Lanczos to estimate the top few eigenvalues very efficiently using Hessian-Vector product. Why would a proposed method be better than this? \\n\\nAccording to discussion in Related work section, the benefit over power-iteration from (LeCun et al., 1993) is that it is free from choosing a running average hyperparameter, in which case the novelty of the proposed method itself does not seem significant especially with lack of analysis that directly compares one another. \\n\\nLastly, while the authors suggest that the method is efficient, most experiments are done in the toy-ish setting whereas [3] could study full Hessian spectral density of ImageNet-scale networks. \\n\\n\\nProposed quasi-Newton does not have sufficient analysis that the idea works. There are numerous ideas for proposing new optimization and without careful, thorough comparison to baseline, well-known methods. I believe Figure 2 is testing that the optimization works. 
However it is not clear with the current set of experiments whether this optimization can be useful compared to simple Adam or SGD with momentum. \\n\\t\\nWhile one could eliminate the learning rate schedule with this method, it is not clear whether the proposed quasi-Newton method provides benefit over used schedules. SGD works well without schedule, but typically schedule improves performance beyond constant learning rate. Does automatic determination of learning rate provide the benefit of custom learning rate schedule without any tuning procedure? I think this question needs to be answered for the proposed method to have impact on practitioners.\", \"questions\": \"As far as I can see, the procedure for both top eigenvalue and top-k eigenvalue is very similar to how one would estimate them using simple power iteration or Lanczos algorithm(https://en.wikipedia.org/wiki/Lanczos_algorithm) used in e.g. [2,3,4]. Could you explain how they differ? At least stochastic estimation (not using full batch) has been utilized in [3] where they study dynamics of Hessian spectral density during Training and I believe few top value estimation is much simpler to extract.\\n\\nDo the authors believe chaos in parameter space is a relevant question to answer? In the end, one is interested in neural network function and even if the parameters diverge the function output may converge since many different parameter configurations can lead to the same or similar functions. \\n\\nIn section 6, suggestion for using larger learning rate towards the end of training seems to be against the typical practice for obtaining well performing models. I believe even suggested reference (Smith et al., 2017) suggests decaying the learning rate is a good idea for generalization and mimics the effect by increasing the batch size. 
\\n\\nWould be interesting to find out if the analysis in Appendix B can generalize to Adaptive optimization algorithms such as RMSProp or Adam.\", \"nits_and_additional_feedback\": \"\", \"these_are_few_nits_and_feedback_to_improve_the_paper_which_were_not_critical_for_evaluation\": \"Ref (Sprott & Sprott 2003) seems to be actually a single author book, I suspect bibtex is misconfigured. \\n\\nFor the experiments section (4), it is not clear what message the experiments are conveying. The experiment setup without knowing what motivates the analysis is hard to follow, so I suggest starting with a general goal for the experiments to help the readers.\\n\\nFor second-order methods in Neural Networks in the related works section, it is worth mentioning K-FAC papers [5,6,7]\\n\\nI am not sure if the statement \\u201crobustness to learning rate choice\\u201d is correct in Section 5. In most cases, choosing a proper learning rate for a first order method is quite important and probably the single most important hyperparameter to tune. If the learning rate is too small, convergence will be too slow, if it is too large SGD can diverge. Also there\\u2019s evidence that a very small range of learning rate is critical for improving performance [1].\\n\\nFigure 3 with x, y axis labels missing\\n\\nInconsistent use of cifar10, CIFAR10, CIFAR-10 across the paper\\n\\nUnderstandable for conference submission with deadlines; content in the Appendix needs more cleaning up. (e.g. 
wrong quotation marks, \\u2018newton\\u2019 instead of \\u2018Newton\\u2019 etc)\\n\\n\\n[1] Lewkowycz et al., The large learning rate phase of deep learning: the catapult mechanism, arXiv:2003.02218\\n[2] Gur-Ari et al., Gradient Descent Happens in a Tiny Subspace, arXiv:1812.04754\\n[3] Ghorbani et al., An Investigation into Neural Net Optimization via Hessian Eigenvalue Density, ICML 2019\\n[4] Jastrzebski et al., On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length, ICLR 2019\\n[5] Martens & Grosse, Optimizing Neural Networks with Kronecker-factored Approximate Curvature, ICML 2015\\n[6] Grosse & Martens, A Kronecker-factored approximate Fisher matrix for convolution layers, ICML 2016\\n[7] Ba et al., Distributed second-order optimization using Kronecker-factored approximations. ICLR 2017.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
cR91FAodFMe | Learning to Set Waypoints for Audio-Visual Navigation | [
"Changan Chen",
"Sagnik Majumder",
"Ziad Al-Halah",
"Ruohan Gao",
"Santhosh Kumar Ramakrishnan",
"Kristen Grauman"
] | In audio-visual navigation, an agent intelligently travels through a complex, unmapped 3D environment using both sights and sounds to find a sound source (e.g., a phone ringing in another room). Existing models learn to act at a fixed granularity of agent motion and rely on simple recurrent aggregations of the audio observations. We introduce a reinforcement learning approach to audio-visual navigation with two key novel elements: 1) waypoints that are dynamically set and learned end-to-end within the navigation policy, and 2) an acoustic memory that provides a structured, spatially grounded record of what the agent has heard as it moves. Both new ideas capitalize on the synergy of audio and visual data for revealing the geometry of an unmapped space. We demonstrate our approach on two challenging datasets of real-world 3D scenes, Replica and Matterport3D. Our model improves the state of the art by a substantial margin, and our experiments reveal that learning the links between sights, sounds, and space is essential for audio-visual navigation. | [
"visual navigation",
"audio visual learning",
"embodied vision"
] | Accept (Poster) | https://openreview.net/pdf?id=cR91FAodFMe | https://openreview.net/forum?id=cR91FAodFMe | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"DlA8oKiPbfG",
"kjCa6a73irM",
"xbUf-bYKu8S",
"eOlCEE5YVVd",
"zNerllBqvcN",
"e-W8aB43f8",
"u625kkWy1zz",
"hc8Bt5Qch2",
"aN5dAGl6P15",
"jRezrVYFhWW"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040468986,
1605842244195,
1605842157746,
1605842088715,
1605841967065,
1605841690705,
1604426972059,
1603928835767,
1603877015413,
1602973286750
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3405/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3405/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3405/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3405/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3405/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3405/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3405/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3405/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3405/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper considers a variant of the point-goal navigation problem in which the agent additionally receives an audio signal emitted from the goal. The proposed framework incorporates a form of acoustic memory to build a map of acoustic signals over time. This memory is used in combination with an egocentric depth map to choose waypoints that serve as intermediate subgoals for planning. The method is shown to outperform state-of-the-art baselines in two navigation domains.\\n\\nThe reviewers all agree that the paper is very well written and that the evaluations are thorough, showing that the proposed framework offers clear performance gains. The idea of combining acoustic memory as a form of map with an occupancy grid representation as a means of choosing intermediate goals is interesting. However, the significance of the contributions and their relevance are limited by the narrow scope of the audio-video navigation task, which seems a bit contrived. The paper also overstates the novelty of the work at times (e.g., being the first use of end-to-end learned subgoals for navigation). The author response resolves some of these concerns, but others remain.\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"We appreciate your helpful feedback.\", \"1\": \"Missing one baseline that predicts next step directly instead of waypoint.\\nAs suggested by the reviewer, we have experimentally validated that the waypoints are better than next-step actions by training our model to predict low-level actions directly instead of waypoints, and updated the 'Ablations' paragraph in Section 4 of the main paper with the results. We note that our full AV-WaN model outperforms this ablation by a large margin. For example, on Matterport3D our model achieves 28\\\\% and 15\\\\% higher SPL compared to the ablated version without waypoints for the heard and unheard sound settings, respectively.\", \"2\": \"Search for audio goal using a room-centric representation.\\nRecall that the AudioGoal task requires the agent to navigate in unmapped environments (e.g., rescue search) as is the case in other navigation tasks like PointGoal and ObjectGoal. Lacking a floor plan at the start of the episode makes it hard for the agent to identify the rooms' layout. Furthermore, our model is flexible in terms of where to set the next waypoint. If the model finds strong cues to enter a room based on its observations then it will set waypoints that will take it towards the door of the room and then inside (as the agent gradually builds the map of the environment). \\n\\nFinally, a brute-force room search could be applicable in one- or two-room apartments (although the agent still needs to find the exact location of the goal, see Fig. 3), but it is highly inefficient in large environments like Matterport3D (Chang et al., 3DV 2017) where one scene has more than 22 rooms on average.\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"We appreciate your positive feedback.\", \"1\": \"Acoustic memory gives relatively small improvements.\\nThe acoustic map gives relatively small improvements when used with clean audio. However, when used with noisy audio, our model with an acoustic map is much more robust than the other models, as can be seen in Fig. 4b. Specifically, when the noise level exceeds 30 dB, our model with the acoustic memory suffers a very minor decline in performance; however, without the acoustic memory we see the noise has a significant impact on the model.\", \"2\": \"Any failure examples?\\nYes, we provided some failure examples in the supplementary video (see starting from 5:25). Sometimes when the audio goal is just next to a wall or cornered between obstacles, the audio reflections could be strong, and the agent after reaching the goal quickly would oscillate around the goal trying to locate the exact location. We also saw some cases where the agent would issue a stop action prematurely just next to the goal. We expect the changes in audio intensity are less detectable in the immediate area around the goal where the audio is the loudest which may lead to this behavior. We have updated the paper to include an analysis of failure cases in `Navigation results' under Section 4.\"}",
"{\"title\": \"Response to reviewer 4\", \"comment\": \"We appreciate your positive feedback.\\n\\nDefinition of successful episode. \\nAn episode is successful if and only if the agent stops at the exact audio goal location on the grid, as mentioned in the definition of Success Rate in Section 6.7 in the supplementary. We have also included the criterion for success in 'Metrics' in Section 4 of the revised main paper.\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"We appreciate your helpful feedback.\", \"1\": \"Fig. 2 slightly unclear.\\nThank you. We intended to show the graph maintained by our planner module that is constructed based on the map as the agent moves (Section 3.4). However, to avoid possible confusion we updated Fig. 2.\", \"2\": \"Not clear if model uses RGB or depth image.\\nAs shown in Fig. 2, we use depth images for building the geometric map (via the projection operation). We do not use RGB as input. As noted in the first paragraph in Section 3.2, we use depth because it is more effective than RGB for building geometric maps (Chaplot et al., ICLR 2020). Fig. 1 is meant to illustrate the high-level idea in the paper. We will make sure it is clear.\", \"3\": \"Is audio directionality used in the acoustic memory?\\nThe model does not explicitly encode audio directionality in the acoustic memory. However, the gradient in audio intensity stored in the memory can indicate to the agent the direction during navigation (i.e., the goal is usually in the direction of increasing audio intensity).\", \"4\": \"What happens if the waypoint is not possible with the graph search?\\nIf the waypoint is not reachable using graph search, the agent takes a single random action and breaks the planning loop. We have added this point to Section 3.4 in the updated version of the paper.\", \"5\": \"How are unexplored regions treated for the graph search?\\nThe unexplored regions are considered as free space during planning, following Chaplot et al., ICLR 2020. We have updated Section 3.4 of the paper with this point.\", \"6\": \"Impactfulness of this setting beyond audio-visual navigation.\\nFirst, we believe that the AV-navigation task is impactful, in terms of both real-world applications and learning challenges for the agent to translate raw audio-visual sensing into intelligent navigation actions. 
Second, please note that, unlike our approach, one of the compared methods (Gan et al., ICRA 2020) does indeed boil down the problem to point-goal navigation after predicting the goal location from audio, and it substantially underperforms the proposed approach (see Section 4). This shows that isolating the audio as a final goal predictor is insufficient, and our model's joint learning from audio and visual throughout the entire navigation process to predict waypoints is important. Third, we agree that extensions to an audio-based semantic object navigation task could be interesting future work. We added a note about such applications to the Conclusion section.\", \"7\": \"Unclear or overstated contributions with respect to hierarchical policy learning.\\nAs we noted in Section 1, hierarchical policies for navigation are not new (e.g., Chaplot et al., ICLR 2020; Stein et al., PMLR 2018; Bansal et al., CoRL 2019; Caley et al., IROS 2016). However, to our knowledge, learning to set useful subgoals in an end-to-end fashion for the navigation task is new. The novelty is learning audio-visual waypoints of auto-adaptive granularity to maximize performance. To our knowledge, this is a contribution that is orthogonal to the fact that we tackle audio-visual navigation, and can also be applied to other application settings (see the Conclusion section for examples). Our idea improves over manually-designed heuristic definitions of waypoints, as it allows the agent to be more or less conservative in waypoint selection as per the demands of the situation, as shown through our model's improved performance over the Frontier Waypoints and Supervised Waypoints baselines. Further, we couldn't find anywhere in the cited literature any claims about \\\"heuristics\\\" being better than \\\"unstructured hierarchy\\\". 
If we have still missed something, we would like to request the reviewer to point us to the specific paper.\", \"8\": \"Wording: multi-modal memory is ''beneficial\\\" but not ''essential\\\".\\nFair enough, the acoustic map is beneficial for the case of clean audio. However, in the presence of audio noise, it does become essential (without it, our SPL drops 0.7 points for 20 dB noise level, see Fig. 4b). We think the results strongly justify its inclusion in the model.\", \"9\": \"i) Move ablation to the main body and report variance; ii) ablate for unheard sounds.\\nWe have updated the 'Ablations' paragraph in Section 4 of the paper with these changes and additional numbers. We report the standard deviation of each model with 5 test runs, each having a different random seed. The standard deviation is $\\\\leq 0.5$, which is smaller than most of the improvement gains.\"}",
"{\"title\": \"Meta response for all reviewers\", \"comment\": \"We thank all the reviewers for their valuable feedback. Overall, the reviewers have appreciated our model design, comprehensive experimentation and strong results, detailed ablation studies and analyses of the model's behavior. They have also suggested some changes and asked for some clarifications. We address them in this rebuttal and by making minor revisions to the paper (highlighted in blue).\"}",
"{\"title\": \"Thorough and useful paper, with some clarifications and restated contributions.\", \"review\": [\"This work presents an approach for audio-visual navigation, in which an agent receives both an RGBD observation of the world and an audio signal emitted from the goal. The proposed approach leverages a structured memory via an occupancy grid and an acoustic map. A learned hierarchical policy is used to set waypoints within the occupancy grid at a high level, with a low level search over the free occupancy grid. The approach is demonstrated over baselines to reach the goal at a high rate and to do so efficiently.\", \"The paper is well written and clear. The figures and videos are useful. The baselines and results are thorough and show clear benefit of the method and design choices. I appreciate both the comparison to state of the art methods for audio-visual and the baseline comparisons. A few clarifications that should be made:\", \"The right side of Fig. 2 is slightly unclear due to the graph, which on a quick look brings notions of techniques like Savinov 2018. As the graph is just used by the simulator, I\\u2019m not sure it makes sense to visualize in this way.\", \"The figures alternate between showing the observation as RGB and as depth. My understanding from text is that this uses RGB-D, but from figures like Fig. 2 it is not clear where the RGB is used. For Fig. 1 the depth is not shown (though from reading, I understand it to be projected into the occupancy map).\", \"Is directionality from the audio signal used at all within the acoustic memory?\", \"What happens if the waypoint is not possible with the graph search?\", \"How are unexplored regions treated for the graph search?\", \"The paper is somewhat limited by the impactfulness of the setting, audio-visual navigation. The authors make a clear case for uses of such a problem, but in general the setting appears somewhat manufactured. 
It boils down to a setting like point navigation but with a noisily observed goal with an uncertainty distribution based on audio. Another setting with this noisy goal is something like semantic or object navigation, e.g., https://arxiv.org/pdf/2007.14545.pdf, https://arxiv.org/pdf/2007.00643.pdf. Overall I believe approaches from this work may be applied in these settings and the paper could have significantly greater impact if these settings were considered. At a minimum, I believe the paper would benefit from a discussion of applications of ideas from this work beyond audio-visual navigation.\", \"My other concern is that at times the paper is unclear or overstates contributions. Such as stating:\", \"\\u201cThis is a novel technical contribution independent of the audio-visual setting, as it frees the agent to dynamically identify subgoals driven by the ultimate navigation goal.\\u201d and \\u201cThis is a new idea for 3D navigation subgoals in general, not specific to audio-visual\\u201d. Many of the cited navigation papers use a hierarchical approach as a baseline, with the \\u201cheuristics\\u201d they describe presented as benefits over this unstructured hierarchy. Furthermore, many pure HRL papers present results in a navigation setting.\", \"\\u201cWe show that the multi-modal memory is essential for the agent to produce good action sequences.\\u201d Based on the ablations, the multi-modal memory is \\u201cbeneficial\\u201d but not \\u201cessential\\u201d as the performance differences are somewhat small.\"], \"other_notes\": [\"The ablations should be moved into the main body of the paper though as they are quite important and they should include variance for each approach to really understand the significance of the choices. It would also be interesting to include a human baseline for navigation to put performance into context.\", \"The authors should ablate for unheard sounds. 
I expect the audio memory, which is purely based on intensity may perform well here.\", \"_____\"], \"post_author_rebuttal\": \"I appreciate the author\\u2019s response and overall the authors have addressed my concerns. I am thus raising my score. \\n\\nThe only point that I believe still stands is #7, though I should have updated earlier. My issue with claiming this as the first use of end-to-end learned subgoals in navigation is that there have been many recent works from goal-conditioned hierarchical RL that use end-to-end learned subgoals, e.g.,\", \"https\": \"//arxiv.org/pdf/1712.00948.pdf, https://arxiv.org/pdf/1805.08296.pdf, https://arxiv.org/pdf/1909.10618.pdf. Navigation to a known goal is a version of this problem and indeed in these works, the approaches are shown navigating between states. Others have applied end-to-end to navigation and manipulation, http://proceedings.mlr.press/v100/li20a/li20a.pdf. Overall, application of end-to-end HRL to the navigation problem is an interesting area to study, but to claim it as a major contribution I believe the paper should thoroughly examine the tradeoffs as applied to that problem, which I believe requires a detailed and standalone work.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Recommendation to Accept\", \"review\": \"This paper tackles the AudioGoal task of navigating to an acoustic source in a 3D environment. It introduces the idea of an acoustic memory, which maps and aggregates acoustic intensity over time. An agent\\u2019s acoustic memory, in tandem with its egocentric depth view, is then used to select navigation waypoints in an end-to-end manner. Their method beats SoTA in AudioGoal for two environments: Replica and Matterport3D.\\n\\nThe paper presents a simple end-to-end solution to waypoint selection that sits on top of an environment\\u2019s low-level controls. There\\u2019s a very nice symmetry between the occupancy map (used for waypoint selection) and the acoustic memory map \\u2014 backed, of course, by experimental results and convincing ablations.\\n\\nOverall, the paper is extremely well written. Namely, in the exposition of the AudioGoal task. This is coming from someone who (works on embodied language and) is aware of, but not deeply familiar with, the tasks.\\n\\nThe paper provides a comprehensive set of experiments, baselines, and ablations. I particularly like Figure 4, which demonstrates the efficacy of acoustic memory in the presence of microphone noise.\\n\\nFinally, most clarifying questions that I had are addressed in the Supplemental section \\u2014 not distracting from the main points of the paper.\\n\\n[Clarifying Questions]\\n\\nIn Section 3.5, the authors should define what a successful episode means in each respective environment (e.g., within 3 meters of the goal). This affects how SR and SPL are interpreted.\\n\\n[Post Rebuttal]\\n\\nThank you to the authors for addressing my question. The paper presents a simple and elegant approach to the AudioGoal task, backed by extensive experiments and good writing. I'd like to maintain my positive rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nThe authors address the audio-visual navigation problem, which aims to find a sound source in a 3D environment using both audio and visual information. The key innovation of the paper is to learn to set audio-visual waypoints, which decomposes a final goal to useful subgoals. Acoustic memory is introduced to strengthen the auditory perception. Experiments are performed on 3D environments of Replica and Matterport3D using SoundSpaces audio.\", \"pros\": \"(1) A deep reinforcement learning approach for AudioGoal navigation with audio-visual waypoints is proposed. It learns to set useful subgoals and address the navigation in a hierarchical manner.\\n\\n(2) The experiments are thorough and can well validate the effectiveness of the proposed audio-visual waypoint-based approach. \\n\\n(3) The paper is easy to follow and the provided demo can nicely illustrate the problem and demonstrate the superiority of the proposed method.\", \"cons\": \"(1) Rather than only current audio, the authors propose to use the acoustic memory, which aggregates the audio intensity over time in a structured manner. Although the authors claim that acoustic memory can strengthen auditory perception, we only observe relatively small improvements (AV-WaN vs. AV-WaN w/o At) in Table 2.\\n\\n(2) Any failure examples? Please provide some failure results of the proposed audio-visual waypoint-based method and give an analysis in the main paper. Failure cases can help us to understand the drawbacks of the current subgoal based model. \\n\\n*** Post-Rebuttal ***\\n\\nThe authors addressed my concerns in the rebuttal. Overall, this is an interesting paper and extensive experiments are conducted. Thus, I would like to keep my positive rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting submission on audio-visual navigation. The role of waypoints can be better justified.\", \"review\": \"This paper studies the problem of navigating to the sound source in a virtual environment such as Replica. The main contribution is a new formulation that learns a policy on the next \\\"waypoint\\\" and uses the predicted waypoints as intermediate goals for path planning. The results are promising, much better than recently published baselines especially Gan et al and Chen et al.\\n\\nThe paper is complete, well-written, and the reference is thorough. The formulation is well-justified. The results look promising. The authors have also included some interesting analyses to better understand how the model works.\\n\\nOn the negative side, I'm still not fully convinced that this is a practically useful problem, or how challenging it could be if formulated in the right way: assuming a household robot has a room-centric representation once deployed, even simply walking through all rooms will let it quickly identify the audio source. In this case, it's unclear why we need such complicated policy learning algorithms. But I don't mean to reject this paper based on this philosophical argument. The authors don't have to respond to this point.\\n\\nWhat I do want to hear from the authors is why waypoints are useful. Though the authors have included some ablated models, there misses one baseline that employs exactly the current formulation but lets the policy learn to predict the next step (action) directly instead of to predict waypoints. Path planning is therefore no longer required. If the authors restrict the action space to be the space of rooms (i.e. 
actions are \\\"go to the room on the right\\\", instead of \\\"moving right by 1 foot\\\"), unless the agent believes it's already in the same room as the audio source, then such a policy learning method may work quite well, maybe even comparable with the waypoint-based method?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
-5VpoDCExrU | Log representation as an interface for log processing applications | [
"Mohammad Amin Sadeghi",
"Shameem Parambath",
"Ji Lucas",
"Youssef Meguebli",
"Maguette Toure",
"Fawaz Al Qahtani",
"Ting Yu",
"Sanjay Chawla"
] | Log files are files that record events, messages, or transactions. Logs are rich containers of data because they can store a sequence of structured textual and numerical data. Many sequential forms of data including natural languages and temporal signals can be represented as logs.
We propose to represent logs at a few levels of abstraction including field level, log level, and log sequence level. The representation for each level can be computed from the previous level. These representations are in vector format and serve as interfaces to downstream applications. We use a version of transformer networks to encode numerical information as well as textual information that is suitable for log embedding. We show how a number of log processing applications can be readily solved with our representation. | [
"Vector embedding",
"Logs",
"Search",
"Causal Analysis",
"Anomaly Detection"
] | Reject | https://openreview.net/pdf?id=-5VpoDCExrU | https://openreview.net/forum?id=-5VpoDCExrU | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"SK5G4FVNRKw",
"ywJpzELJo1d",
"Dn41cMiSlBa",
"2YxWpI_Kp-R",
"wF3P7UEfjK9",
"acg4qdMwZTg",
"Hj3372pBirk",
"_CMq-C2pNgH",
"TGFkZqmtEos"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040514063,
1606303700352,
1606296619270,
1606293151197,
1606290773721,
1604172366461,
1603883329205,
1603783715134,
1603704334794
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3404/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3404/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3404/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3404/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3404/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3404/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3404/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3404/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Most of the reviewers had concerns about the model being considered, and there are additional concerns about the paper's discussion of the experimental results.\"}",
"{\"title\": \"Some comments in the review are not relevant. Some comments contradict verifiable facts. Please read the details.\", \"comment\": \"1- \\\"In section 4, the authors only described the proposed time encoding technique, while other parts of the log representation (such as encoding a log entry) were not described.\\\"\\n\\nThe reviewer is asking how transformers encode log entries. Encoding of each log entry is learned by the Transformer network. This is a widely known characteristic of any end-to-end deep model including transformers and LSTM. Transformers learn and use a look-up table for representations. Please refer to the paper \\\"Attention is all you need\\\". In the original submission we assumed our reviewers were aware of this. We added a note in the paper to clarify this.\\n\\n2- \\\"it is not clear if the proposed time encoding technique is better than the related methods (there are many related methods for encoding/representing time).\\\"\\n\\nThere are no related methods for log time encoding (!) that we could compare to. To the best of our knowledge, we presented the first work to encode \\u201ctimestamp\\u201d in logs and there is NO other work to encode time for logs. We made a fresh search and we did not find any relevant works. We reuse the same formulation offered by the original transformer paper to encode location and we argue that this formulation is relevant to time as well. The original transformer paper found that static encoding (similar to what we are using) performs better than a learned encoding. So we didn't feel the need to explore alternative time encoding techniques. One can explore possible time encoding techniques in a separate work.\\n\\n3- \\\"The evaluation of the proposed approach is very weak.\\\". We evaluated our model on two datasets, compared to four previous works and showed five downstream applications. OVER half of our paper is devoted to experiments! 
We also recently added a new experiment on the BlueGene dataset and we outperformed a recent work.\\n\\nOne of the papers that the reviewer himself proposed (the only relevant one among the three papers) evaluates only on HDFS! Yet, the reviewer believes that evaluating on HDFS and Radiostation data is not enough. Also please note that we added another experiment on BlueGene data where we beat previous works.\\n\\n4- \\u201cthe obtained results on HDFS were not very different from the results of the related work (DeepLog).\\u201d\\n\\nWe outperformed DeepLog on the HDFS dataset by 0.978 to 0.96. We nearly \\u201cHALVED\\u201d the error, and we think the community agrees that halving the error on a widely studied benchmark is considered significant. \\n\\n5- \\u201cno experiments were conducted for the causal analysis task.\\u201d\\n\\nSection 5.5 is devoted to our experiments on causal analysis and it spans about 20% of the experiments section. We did not perform \\\"quantitative comparison\\\" and we devoted one paragraph to explain why. We explained that NO ground-truth and NO benchmark exists for causal analysis on logs. Therefore, there is no way anybody can perform quantitative evaluation. We established the superiority of our model on other experiments and resorted to qualitative experiments on causal analysis. Our aim was to show that causal analysis is a possible downstream application.\\n\\n6- The paper \\u201cRobust log-based anomaly detection on unstable log data\\u201d, which we cited, investigates anomaly detection on synthetic unstable log data. This paper uses an LSTM for anomaly detection. Our model outperforms this work on the standard HDFS dataset. The 99% figure they report is on a synthetic dataset that is not relevant to us.\\n\\n7- You asked us to compare to the following two papers. 
We carefully studied these papers and to the best of our judgement, we didn't find them relevant:\\n- Zhu et al., Learning to log: Helping developers make informed logging decisions, in Proc. ICSE 2015. pp. 415\\u2013425.\\n- P. He et al., \\u201cCharacterizing the natural language descriptions in software logging statements,\\u201d in Proc. ASE 2018, pp. 178\\u2013189.\\nThese two papers investigate logging practices by software developers. There is no machine learning contribution in these two papers. The problems that these two papers investigate are not relevant to our field.\\n\\nGiven the above rebuttals, we ask you to reconsider your rating.\"}",
"{\"title\": \"Three questions and four comments. Questions were answered and comments were addressed.\", \"comment\": \"Thank you for your comments and questions.\", \"1__novelty\": \"We agree with you where \\u201cThe topic of the paper is very interesting and relatively less studied in machine learning domain, though important for various applications.\\u201d This is why we realized that the community needs a paper to make a bridge between standard machine learning techniques and system logs. The novelty and the contribution of our paper is in making this connection. We found no tool better than transformer networks for processing logs. We think this paper helps people who work with logs make better use of machine learning techniques for their own applications.\", \"as_a_review_of_the_novelties_of_our_work_please_note_that\": \"- This is the first work to use transformer networks to embed sequences of logs. Prior work including Logsy and DeepLog either don't use transformers, or they only process single log entries.\\n- We are the first work to encode log timestamps to learn log representation and we also show empirically for the first time that timestamps help improve embeddings. We also give a clear picture how to encode time for the first time.\\n- Our paper is the first paper to review several downstream applications using a unified log embedding scheme which is important.\\n- We proposed these levels of abstraction (to simplify thought process and engineering process) for the first time.\\n\\n1-b- \\\"The specific format of time encoding in Section 4.1 is not well motivated.\\\" We added a paragraph to motivate the specific format of time encoding.\\n\\n2- \\u201cIs it reasonable to consider a log entry as a sequence of characters?\\u201d Our hierarchical representation provides the flexibility to do precisely that if it is warranted. 
In fact we have separately tried using a FastText-like embedding (at character level) to create a clustering-based tokenizer. Furthermore, past experiments in NLP show that currently word level granularity often gives better results in practice. If in the future character level granularity outperforms word level, our methodology is compatible with that too.\\n\\n3- \\\"How to handle a large number of arguments in logs\\\":\\n- At the parsing level: Our second level of abstraction is \\\"parsed logs\\\". The role of the log parser is exactly to extract arguments and handle grammar complexities. Log parsers parse structures as complex as JSON files and computer programs. We build our pipeline on top of log parsers so they can handle these parsing complexities.\\n- At the encoding level: We don't need to \\\"mine logs to different templates\\\". Please note that our third level of abstraction is \\\"field embedding\\\". A complex log entry could have many fields that are parsed as (field_name, value) pairs. We pointed out in Section 3-3 that DeepSet combines any number of fields.\\n\\n4- \\\"In Section 3, What is the granularity used for sequence embedding in logs?\\\" A sequence could be a whole log file, a block of logs or a sliding window. Our proposed method can handle sequences of logs irrespective of what process generated those sequences. We discussed different configurations in the experiments section.\\n- \\\"How to determine the appropriate granularity?\\\" The appropriate granularity is dictated by the application. In some applications (like HDFS) we need to identify anomalous \\u201cblocks\\u201d, so the granularity is at the block level. In search, we are interested in sliding-windows. In log-entry level anomaly detection we need log-entry level granularity.\\n\\n5- \\u201cFor unsupervised anomaly detection, it is not clear why anomalous points would fall into a small cluster.\\u201d We did not argue that anomalous points would fall into a small cluster. 
Rather, we argued that because anomalies are rare, anomalous points are scattered around. So anomalies form several small clusters because they are naturally scattered and cannot fall into a single cluster. One can use any clustering technique such as Isolation Forest and it may outperform k-means. \\n\\nPlease note that our focus is not the choice of downstream clustering technique. Our focus is that we should represent logs in vector embeddings.\\n\\n6- \\u201cFor supervised anomaly detection, what is the % of anomalous logs used in the training?\\u201d, As noted in the paper, in the HDFS benchmark anomalies are attributed to \\u201cblocks\\u201d that contain multiple logs. In this benchmark 2.9% of blocks are anomalous. These details are the standard characteristics of HDFS benchmark and we used the same standards.\\n\\n7- \\u201cBeing an application-oriented paper, more importance should be given on hyperparameter setup and tuning for all the experiments.\\u201d Our paper is really a \\u201cmethodology paper\\u201d rather than an \\u201capplication paper\\u201d. Our goal is not to target a specific technical application, but rather we offer a methodology to learn, process and use log representations for machine learning applications. Since our focus is on methodology, we did not discuss hyperparameter details because they differ application to application.\\n\\nThanks\"}",
"{\"title\": \"We addressed all of your comments.\", \"comment\": \"Thanks for your helpful comments. We addressed all of your comments.\", \"1__motivation_for_time_encoding\": [\"We had a quick argument in Section 2 regarding why log timestamps are helpful. We didn't elaborate more because we thought it was obvious for the readers that time information is helpful. However, we agree that further elaboration on why time is useful is helpful. Therefore, we added two paragraphs in the beginning of Section 4 to elaborate on why time encoding helps. Here are some intuitions on why time is helpful:\", \"An unexpected delay can signal an anomaly or some underlying issue. Delays can only be seen using timestamps.\", \"Event logs are often intermittent. Practical systems often record several logs within a few seconds and are silent for a few more seconds. Without time, our model has no idea how logs should be grouped temporally. With time, the attention modules in the transformer network can implicitly relate event logs that are relevant.\", \"Concurrent threads/states: sometimes multiple threads write to a single log file, or multiple concurrent \\\"states\\\" get reflected in a single thread. Time helps the model disambiguate these threads/states.\", \"We believe there are more complex temporal patterns that humans don't understand. We leave them to be learned by the attention capabilities of the transformer model.\"], \"2__logsy\": \"Even though the scope of Logsy is different from our work, since you requested, we compared our model to Logsy and we outperformed it. We reflected the results in the paper.\", \"but_why_the_scope_of_logsy_is_different\": [\"Logsy only works on \\\"single log entry\\\" level of granularity (our abstraction level 4). Unlike our work, Logsy does not handle sequences of logs. 
Therefore, Logsy cannot work on our benchmarks such as HDFS or Radio Station where what matters is \\u201cthe sequence of logs\\u201d, rather than \\u201cindividual logs\\u201d. Therefore Logsy does not work on the benchmarks that our work and DeepLog are evaluated on. This is why Logsy didn't report on the HDFS benchmark. Logsy uses Transformer Networks to process single log entries, while we use Transformer Networks on sequences of logs as well. So the scope of the two works is different. However, other works such as DeepLog, PCA, IM, and N-gram are applicable to our benchmarks and we have compared to them.\", \"There is a wide range of settings for anomaly detection. Logsy is a \\u201ctechnical paper\\u201d focusing on a specific setting for supervised anomaly detection. In contrast, our focus is not on a specific setting of anomaly detection. We are promoting a general \\u201cmethodology to process logs\\u201d and we address several different applications, including but not limited to more general supervised and unsupervised anomaly detection. So again, the problems that the two works are trying to solve are different.\", \"As we pointed out in Section 1.1 of our paper, the idea behind Logsy is to use logs from auxiliary datasets to improve the results on another dataset. Even though our work can be used as a tool for such a specific problem setting, this setting has not been the focus of our work.\", \"Even though Logsy was uploaded to arxiv.org five weeks before the ICLR deadline, we cited and discussed it in our original submission. Google Scholar reports that only one other paper (a survey paper) has cited Logsy yet, but we cited it because they also use transformer networks.\", \"After all, we beat Logsy on its own dataset of choice.\", \"3- Discussion. As you suggested, we added discussion for each experiment. 
We also added an extended discussion in the supplementary material (due to space limitation).\", \"4- We fixed the typos that you brought up.\", \"We think we addressed the three points you brought up. We elaborated on the motivation for time, we compared to Logsy and explained why its scope is different, and we added discussion on individual applications as you requested. Given that we addressed all the three weaknesses you brought up and there are no more weaknesses left, we ask you to please reconsider your rating.\", \"Thanks\"]}",
"{\"title\": \"We think this review is thoughtful and careful. Several valid issues brought up by this review and we addressed them.\", \"comment\": \"Thank you for your thoughtful and careful comments. We think it is important that you noted that the goal of this paper is to promote a log analysis methodology and pipeline rather than to compete on the accuracy of a specific application.\\n\\nCurrently the log processing community does not use standardized techniques to solve their problems. We believe the community needs a thought process to make a bridge between raw logs and machine learning applications. No standardized process is proposed before this work. By establishing a processing pipeline and a multi-layered \\\"representation\\\" interface, we help log processing community more easily and effectively use machine learning tools to solve their problems.\", \"your_comments\": [\"We agree that we needed more connection between the levels of abstraction. As you suggested, we elaborated on the relationships between the levels of abstraction.\", \"Regarding the release of the dataset: YES, we will release the communication dataset. Parts of this dataset are going to be anonymized.\", \"We fixed all of the typos you pointed out.\", \"Thanks\"]}",
"{\"title\": \"Log representation as an interface for log processing applications\", \"review\": \"This paper proposes a multi-level abstraction for representing logs that appear in a wide range of application domains and a Transformer-based model for embedding those representations in a vector space. The authors show that a variety of log processing tasks (anomaly detection, predictive analysis, search, etc.) can be implemented on top of this foundation along with empirical results for some of them based on two real-world log datasets.\", \"strong_points\": [\"The paper is well written. It nicely educates the reader about the prior state of the art, motivates how the contributions of the work fit in, and then presents those contributions in a systematic manner with empirical evidence where applicable.\", \"Log processing is a widespread application with increasingly large and diverse datasets and need for a range of automated data analysis tasks over them. The multi-level abstraction to provide a common representation for this domain as a whole is a neat idea that can help organize existing knowledge into a common framework as well as facilitating future innovation based on that framework.\", \"The paper shows how transformer networks can be extended with the notion of time, which can be useful for time-oriented data in general, including logs.\", \"The proposed models are implemented in practice as part of a transformer library in PyTorch and are applied to a variety of log processing tasks with good results.\"], \"weak_points\": [\"The levels of abstraction could have been connected more clearly with the transformer model and the log processing applications in the experimental part. 
The mapping between these parts is not as easy to understand as it could have been based on the current writing.\", \"It would be good to add a figure to Section 3 to illustrate the various abstraction levels and how they are related (e.g., in the form of a pipeline together with an example log entry representation for each).\"], \"additional_comments\": \"- Releasing the telecommunications dataset would be a good contribution to the research community.\\n- Section 4.1, \\\"Given log sequence <(l_1, t_1), ..\\\": Please make sure to use consistent terminology. \\\"log sequence\\\" is defined slightly differently in Section 3.\\n- Figure 4: In the caption, it looks like \\\"Middle\\\" and \\\"Right\\\" should be in the opposite order. Also, the colors in the third graph are not easily readable.\\n- Typos:\\nlearnt -> learned\\nanomaies -> anomalies\\nuseful applications -> useful (for?) applications\\noh HDFS dataset -> on HDFS dataset\\npytorch -> PyTorch\\nfigure x -> Figure x\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review for Log Representation as an Interface for Log Processing Applications\", \"review\": \"**Summary**\\nLogs are widely used in computer systems to record their events. The recorded logs can be applied to a wide variety of diagnostic applications such as anomaly detection, root cause analysis, and causal analysis. This paper proposes levels of abstraction for log representation. There are in total 5 levels of abstraction, which are log sequences, parsed logs, field embeddings, log embeddings, and sequence embeddings. Each of the aforementioned levels can be derived from its preceding levels. The paper uses the transformer model to generate a log representation. Moreover, the authors propose a time encoding method and add the time encoding to the transformer to improve the representation ability of the proposed log representation method. In the evaluation section, the authors apply the generated log representation to various downstream applications to test the effectiveness of the proposed method.\\n\\n**Strengths**:\\n1. The idea of using different levels of abstraction to generate log embeddings is interesting.\\n2. The paper conducts a thorough experiment over various downstream tasks based on the learned log representation.\\n\\n**Weaknesses**:\\n1. The motivation for adding time encoding to the transformer is not clear. It would be better if the intuition behind this variation could be discussed rather than simply using the result of the ablation study.\\n2. It seems that the paper just applies the Transformer model to the log representation task and there is no comparison between the proposed method and other log representation methods in the evaluation part. In the related work section, the authors mentioned a recently proposed log embedding work (Logsy). It would be better if there is a comparison between the two different methods in the evaluation section.\\n3. There is little discussion about the results of the experiments. 
It would be better if some discussion of why the use of the proposed representation can or cannot improve the performance could be mentioned at the end of each experiment.\\n\\n**Minor Weaknesses**:\\nThe presentation of the paper could be improved. \\n1. The figures for the experiment results are too far away from the section that describes and discusses the experiment.\\n2. The \\u201coh\\u201d in the caption of figure 3 should be \\u201con\\u201d.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper addresses an interesting application, but lacks novelty in the approach\", \"review\": \"The paper deals with log data representation and analysis. Logs are important to understand the status of a system and perform various root cause analyses. This paper proposes a transformer based approach to obtain vector representations of log data, at various levels such as for key-value pairs within a log entry, a log entry within a sequence of logs, and various blocks of log sequences. The usefulness of log embeddings is shown on multiple log downstream tasks.\\n\\nThe topic of the paper is very interesting and relatively less studied in machine learning domain, though important for various applications. However, it has several drawbacks, as follows. \\n\\n1. The technical contribution of the paper is very limited. The transformer based approach is not novel. The specific format of time encoding in Section 4.1 is not well motivated.\\n\\n2. In Section 3, is it reasonable to consider a log entry as a sequence of characters, instead of a sequence of words? Most of the log entries consist of a set of key words (presented in a readable format) and some arguments (can be numeric).\\n\\n3. One key difficulty in handling log data is the presence of a large number of arguments (or parameters) within each log entry. Thus, it becomes difficult to mine logs to different templates. The exact parameter values can be very different in different logs. So, applying text modeling approaches directly on log data may not be the best approach.\\n\\n4. In Section 3, What is the granularity used for sequence embedding in logs? Is it for a whole log file, or a block of logs? How to determine the appropriate granularity for some downstream application?\\n\\n5. For unsupervised anomaly detection, it is not clear why anomalous points would fall into a small cluster. 
Rather, they would probably be distributed over multiple clusters, but still will be far away from the respective cluster center. The paper could have also used some standard anomaly detection algorithm such as Isolation Forest once the vector embeddings are generated.\\n\\n6. For supervised anomaly detection, what is the % of anomalous logs used in the training?\\n\\n7. Being an application-oriented paper, more importance should be given on hyperparameter setup and tuning for all the experiments. The lack of availability of source code also makes the reproducibility of the results difficult.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Log Representation as An Interface for Log Processing Applications\", \"review\": \"This paper proposes to represent system logs at five levels of abstraction (including log sequence, parsed log, field embedding, log embedding, and sequence embedding). The representation at each level can be computed from the previous level. A Transformer Network is utilized for time encoding. The paper also describes various log-related applications based on the proposed log representation. Some experiments were conducted to evaluate the proposed approach.\\n\\nLogs are useful for understanding and diagnosing software-intensive systems. It is good to see that this paper proposes a new neural representation of log data. The authors also suggested various applications of the proposed log representation.\\n\\nIn section 4, the authors only described the proposed time encoding technique, while other parts of the log representation (such as encoding a log entry) were not described. Also, it is not clear if the proposed time encoding technique is better than the related methods (there are many related methods for encoding/representing time). \\n\\nThe evaluation of the proposed approach is very weak. In section 5, the authors mentioned the Radio datasets. However, the use of the Radio dataset is not described. The authors only evaluated the anomaly detection model on the HDFS log dataset, which is not enough. Also, the obtained results on HDFS were not very different from the results of the related work (DeepLog). Furthermore, no experiments were conducted for the causal analysis task. Therefore, the effectiveness and generalizability of the proposed approach are not clear. \\n\\nThe paper only compared with a few related methods for log-related tasks. Actually, this area has been widely studied and there is a lot more research work (some also utilized deep learning and language models). The authors could discuss and compare with them. 
Just a few examples:\\nZhang et al., Robust log-based anomaly detection on unstable log data. In Proc. ESEC/FSE 2019, 807-817.\\nZhu et al., Learning to log: Helping developers make informed logging decisions, in Proc. ICSE 2015. pp. 415\\u2013425.\\nP. He et al., \\u201cCharacterizing the natural language descriptions in software logging statements,\\u201d in Proc. ASE 2018, pp. 178\\u2013189.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
PI_CwQparl_ | Image Modeling with Deep Convolutional Gaussian Mixture Models | [
"Alexander Gepperth",
"Benedikt Pfülb"
] | In this conceptual work, we present DCGMM, a deep hierarchical Gaussian Mixture Model (GMM) that is particularly suited for describing and generating images.
Vanilla (i.e., "flat") GMMs require a very large number of components to describe images well, leading to long training times and memory issues.
DCGMMs avoid this by a stacked architecture of multiple GMM layers, linked by convolution and pooling operations.
This makes it possible to exploit the compositionality of images in a similar way as deep CNNs do.
DCGMMs can be trained end-to-end by SGD, which sets them apart from vanilla GMMs that are trained by EM, requiring a prior k-means initialization that is infeasible in a layered structure.
For generating sharp images with DCGMM, we introduce a new gradient-based technique for sampling through non-invertible operations like convolution and pooling.
Based on the MNIST and FashionMNIST datasets, we validate the DCGMM model by demonstrating its superiority over "flat" GMMs for clustering, sampling and outlier detection.
We additionally demonstrate the applicability of DCGMM to variant generation, in-painting and class-conditional sampling. | [
"Gaussian Mixture Model",
"Deep Learning",
"Unsupervised Representation Learning",
"Sampling"
] | Reject | https://openreview.net/pdf?id=PI_CwQparl_ | https://openreview.net/forum?id=PI_CwQparl_ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"LfKRgtxOZx",
"A8gO9I_VYPj",
"ErBHuHwEaE",
"FjCNg1L6Tr",
"h_utj1jmjWz"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040514130,
1604146851188,
1603911532242,
1603739038813,
1603294629357
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3403/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3403/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3403/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3403/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This is a clear reject. None of the reviewers supports publication of this work. The concerns of the reviewers are largely valid.\"}",
"{\"title\": \"Review\", \"review\": \"==== Summary ====\\n\\nThe paper proposes a model that combines hierarchical Gaussian Mixture Models with a convolutional architecture, supporting both estimation and sampling. The model is trained end-to-end via SGD and is composed of 3 types of layers: standard convolutional and max-pooling layers and a newly proposed GMM layer. The latter operates by modeling the stack of channels at each spatial location as vectors sampled from a GMM. The outputs of the layer are the component probabilities at each location, followed by channel-wise normalization. The loss function is the average log-likelihood of every location at every GMM layer. The paper argues for using it as an alternative to other, less interpretable, probabilistic models of images and demonstrates its capacity to model the MNIST and FashionMNIST datasets.\\n\\n==== Detailed Review ====\", \"main_strengths\": [\"A novel architecture inspired by both ConvNets and hierarchical GMMs, allowing for more interpretable representation of images.\", \"Demonstrates that the model can handle simple image datasets and provides the code to reproduce the results.\"], \"main_weaknesses\": \"* There are no obvious theoretical advantages over other probabilistic models.\\n* Experiments do not compare to other methods beyond GMM, so it is hard to determine if there are any significant practical benefits. Additionally, the experiments are limited to only MNIST and FashionMNIST, which raises the question of whether this method is applicable to more complex datasets.\\n* The model does not represent a proper probability distribution. There is also a mismatch between the training objective and the sampling process.\\n* Missing references to other relevant probabilistic models of images that combine ConvNets and GMM.\\n\\nI do not recommend acceptance due to the lack of theoretical or practical benefit of the proposed method and the lack of appropriate comparisons to prior approaches. 
In more detail:\\n1. The paper does not argue for any advantage to the proposed method over the alternatives beyond a general claim of interpretability. The experiments merely demonstrate that the method can model very simple image datasets and has a basic ability to detect outliers. Many models can accomplish the same, and yet they are not compared. The authors should explain why someone would prefer using this model over the alternatives (GAN, VAE, autoregressive models like PixelCNN, or even proper hierarchical graphical models).\\n2. The model itself is not a proper distribution, as opposed to GAN, VAE, and autoregressive models, which do represent distributions. There is a lack of theoretical justification for the proposed loss function and its connection to the generative process (I believe you might be able to show your objective is a lower bound on the true log-likelihood). Regardless, if the only measure of success is image representation, then there are other non-probabilistic methods the authors could have compared to (e.g., plain AE or Generative Latent Optimization). \\n3. It is claimed to be the first method to combine ConvNets and GMMs in an end-to-end manner, but prior works have already done this, though using different constructions. Specifically, Sum-Product Networks with Gaussian leaves [1,2,3,4] have been trained with convolutional architectures [2, 4] and SGD end-to-end [2, 3, 4] on several image datasets, including MNIST and FashionMNIST. These models are proper distributions, equally interpretable, and their samples are comparable to those produced in this paper. They also do not scale well to more complex image datasets (to the best of my knowledge), which is why it is so essential to show experiments on other datasets beyond these basic ones.\\n\\n[1] Sum-Product Networks: A New Deep Architecture. Poon & Domingos, 2011.\\n[2] Tensorial Mixture Models. Sharir et al., 2016.\\n[3] Deep Convolutional Sum-Product Networks. 
Butz et al., 2019.\\n[4] Random Sum-Product Networks: A Simple and Effective Approach to Probabilistic Deep Learning. Peharz et al., 2020.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting way to stack GMMs as density estimators in a convolutional architecture\", \"review\": \"## Summary\\nThe authors propose a convolutional neural network architecture where some layers realize a Gaussian mixture model (GMM) that performs density estimation over the embeddings of the previous layers it receives as inputs. \\nThese embeddings collect probability values. \\nAs such, their whole architecture, DCGMM, can be viewed as stacking density estimators, each of which is fit by maximum likelihood over the output embeddings of the preceding ones, i.e. it estimates a density on a series of latent spaces.\\nConsequently, when applied to images, DCGMM does not provide an explicit likelihood over the pixel space -- or rather, only its first layer, comprising a shallow GMM, does.\\nThe authors use DCGMM as a simulator, i.e., to sample images, and devise a sharpening scheme that tries to work around the non-invertibility of pooling layers (but also of the GMM layer?)\\n\\n## Presentation\\nThe paper is overall well-written and readable.\\nThe biggest concern I have regarding presentation is that the proposed DCGMM is not a density estimator per se, nor is it used for density estimation in the experiments, while it is presented as such.\\nI would suggest that the authors rewrite the paper to include a deep discussion that also properly contrasts DCGMM with the other hierarchical models listed in Section 1.2. 
It seems to me that the latter define a proper density $p(X)$ over the observables $X$ while DCGMM leaves the task to the shallow GMM of its first layer.\\nIn particular, the way to evaluate (and then train) the model can be named 'forward evaluation' mode more than density estimation.\\n\\n## Contributions\\nThe main contribution, i.e., to perform stacked density estimation, seems novel to me.\\nHowever, I would advise the authors to directly present it in this way and not as a normal density estimator over observables.\\nThis implies properly contrasting DCGMM w.r.t. the existing literature (see also comments above).\\nDeep (Gaussian) mixture models have been researched in the literature [1,2,3].\\nDifferently from DCGMM, they retain explicit and tractable likelihoods (but also marginalization and conditioning) as well as the ability to sample exactly from them.\\nThese deep mixtures also show how it is possible to train a discrete latent variable model with thousands of latent variables effectively with EM [4].\\n\\nI feel that the proposed sampling scheme needs more space for discussion and clarifications.\\nFirst, it is not clear to me if the proposed sampling scheme is consistently and correctly sampling from the space $X, Z_1, Z_2, ..., Z_D$ where the latter are the $D$ latent variable spaces associated with the GMM layers in DCGMM.\\nSecond, I wonder if the GMM layer is invertible per se: two different incoming inputs can be assigned the same likelihood by a single Gaussian due to the symmetry of its density. The same might happen when the GMM retains some symmetries as well.\\n\\nConcerning the usage of DCGMM as an outlier detector, it is not clear why only the likelihoods on the top-most latent space ($Z_D$) are employed. 
I wonder what happens when all the layers are utilized (individually or collectively).\\n\\n\\n## Experiments\\nThe experiments focus on using DCGMM as a simulator and as an outlier detector (or clustering) for MNIST and FashionMNIST.\\nThese count as interesting preliminary results but go against the original motivation that DCGMM overcomes the limitations of other deep mixture models that are not able to scale to larger datasets.\\nFurthermore, it would be insightful to compare DCGMM -- even on MNIST and FashionMNIST alone -- against the deep mixture models referenced in Section 1.2. \\n\\nFor the sampling experiment, the effect of sharpening and/or top-S sampling could be measured in a more rigorous way.\\nFor instance, the quality of samples can be assessed via FID scores or any other analogous metric from the GAN/VAE literature.\\nMoreover, are 'duplicates' -- possibly signaling mode collapse -- exact replicas or slight variations? This can be further inspected by reporting for each generated sample its top-k nearest neighbor samples in the training set and some metric (even in pixel-space) of divergence between them.\\n\\nWhy are the authors training only on classes \\\"0-4\\\"? Is there an issue with scalability?\\n\\n\\n## References\\n\\n[1] Sharir, Or, et al. \\\"Tensorial mixture models.\\\" AISTATS (2018).\\n[2] Jaini, Priyank, Pascal Poupart, and Yaoliang Yu. \\\"Deep homogeneous mixture models: representation, separation, and approximation.\\\" Advances in Neural Information Processing Systems. 2018.\\n[3] Butz, Cory J., et al. \\\"Deep convolutional sum-product networks.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 2019.\\n[4] Peharz, Robert, et al. \\\"Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits.\\\" ICML (2020).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Needs comparison to other models and probabilistic interpretation seems unsure\", \"review\": \"In this manuscript the authors present a variant of stacked Gaussian mixture models they propose for modeling images called Deep Convolutional Gaussian Mixture Model. This model may contain analogues of convolutional layers and nonlinearities between the stacked Gaussian mixture models. This model can then be trained using stochastic gradient descent on the gradients propagated through the model. Finally, the authors show some experimental evaluation on FashionMNIST and MNIST.\\n\\nOverall I vote for rejection. While the model the authors present seems to work in principle on images, I do not think the authors present a good argument why their model should be used for modeling images, and there are definitely other models the authors should compare their model to. Also, I have doubts whether the model as presented is a proper probabilistic model.\", \"pros\": \"1) This is a new Gaussian mixture based model.\\n2) It is stackable and inherits some of the benefits of DNNs.\\n3) It is trainable with (stochastic) gradient descent\", \"cons\": \"1) I disagree with the authors in the introduction. While GANs as they discuss are not fully probabilistic models and thus have limitations to their applicability, other models do have very clear probabilistic interpretations and apply to all the tasks discussed here. Examples of such networks are the numerous variations of the variational autoencoder, the invertible network based variations like FLOW or GLOW, and diverse others. As these are ignored by the authors, I don\\u2019t think they place their work well into the literature and do not see a particularly strong argument here that Gaussian mixture models would be a great addition to the modeling of images.\\n\\n2) The authors do not compare their method against any competing methods outside the Gaussian mixture model framework. 
I think they would have to present some comparisons to state of the art methods for the tasks they test their model on. At the very least, compare to some basic models which generate some intuition where Gaussian mixture models overall lie in terms of performance. Without that I am completely lost whether the performance in these tasks is any good.\\n\\n3) If the authors instead want to focus on the conceptual level or advancing our understanding of the presented kind of model, I still think a lot more could and should be done: For example, the authors decide to perform outlier detection based solely on the last GMM layer. While this might be somewhat sensible for the decision between different numbers or categories in MNIST, in general, this is not the probability of the observed data under the model. Why is this used? Similarly: What is the structure of the representations produced by the model?\\n\\n4) Given that the main claimed advantage of the model is its probabilistic interpretation, I find the probabilistic description and analysis of the model somewhat lacking:\\n - How exactly is the probability of a given sample computed in this model?\\n - How can convolution and pooling layers be interpreted probabilistically given that they are not invertible? It seems to me that, for both types of layers, the input may even have zero probability to be produced by the described sampling processes. I.e. either some inputs have 0 probability under the model or the sampling methods do not actually sample from the model.\\n - Just ignoring pooling and convolution steps in the calculation of the probabilities under the model, as I guess the authors do here, seems wrong.\\n\\n5) Each Gaussian mixture model layer in the proposed model converts a continuous input space into a probability distribution over group assignments. 
These assignment probabilities are then linearly mapped and pooled before another Gaussian mixture model layer again interprets the input as a point in R^n to be modeled by a mixture of Gaussians. This is technically possible to some degree if we ignore the restrictions on the support to achieve a valid distribution, but I do not get the intuition for how this could represent the composition of an image well. It is stated as fact that the presented model is good for that, but I think this part requires some justification. Wouldn\\u2019t we expect operations which further work on distributions over discrete spaces instead?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Heuristics shouldn't be considered a probabilistic model\", \"review\": \"Summary\\n-------------\\nThis paper defines encoding and decoding procedures which use transformations inspired by Gaussian mixture models (GMMs). The decoding procedure further involves \\\"sharpening\\\" steps. A heuristic for training the parameters shared by the encoder and decoder is proposed which optimizes the likelihoods of GMMs defined on various outputs of the encoder. The decoder is evaluated in terms of clustering performance, sample quality, and outlier detection.\\n\\nQuality (1/5)\\n-----------------\\nIt seems like a stretch to call the proposed model a \\\"deep Gaussian mixture model\\\" or a probabilistic model at all. For a probabilistic model we should be able to assign a probability to any (measurable) set of the input space. (For some models such as GANs this probability is intractable but it is still easily defined.) However, it is not clear from the model's description (Section 3) what this probability should be. This lack of a well defined density (or measure) is surprising given the authors' emphasis on \\\"density estimation\\\" as a \\\"main objective\\\" of probabilistic image modeling and a potential application of their model.\\n\\nOne could view the entire decoding process as a complicated generative model which involves an iterative sharpening procedure, but this is not how the model is presented. In particular, the training procedure does not seem to be optimizing any divergence of this model and it is not clear how the encoder relates to the posterior of this model.\\n\\nThe output of the GMM layer (\\\"responsibilities\\\") live on a simplex (Eq. 2). If we stack two GMM layers, doesn't the likelihood of the second GMM explode (since the differential entropy of the inputs is negative infinity)? 
I suspect the reason that the training loss doesn't explode may be an artefact of SGD and/or the pooling layers.\\n\\nClarity (3/5)\\n----------------\\nI appreciated that the encoding and decoding procedure as well as the training objective were clearly described.\\n\\nOn the other hand, already in the abstract and the first two paragraphs the authors make confusing claims such as the following:\\n\\n(1)\\u00a0The authors claim in the abstract that \\\"DCGMMs can be trained end-to-end by SGD\\\" and that this \\\"sets them apart from vanilla GMMs which are trained by EM, requiring a prior k-means initialization\\\". But vanilla GMMs may very well be trained with SGD as the authors note themselves in the related work section. While k-means may speed up training, it is not \\\"required\\\" by GMMs.\\n\\n(2)\\u00a0The authors claim that since \\\"images usually do not precisely follow a GMM distribution [...] clustering is not a main objective [of image modeling].\\\" This seems like a non-sequitur.\\n\\n(3) \\\"An issue with GANs is that their probabilistic interpretation remains unclear. This is outlined by the fact that there is no easy-to-compute probabilistic measure of the current fit-to-data that is optimized by GAN training.\\\" I would argue that the divergence(s) (approximately) optimized by GANs as well as their probabilistic interpretation are much better understood than the proposed model, as discussed above.\\n\\nThe authors point out that \\\"training GMMs by SGD is challenging\\\" because the covariance matrices are constrained to be positive definite. Isn't reparametrization relatively easy (C = AA')? And don't you have the same issue in your model (Eq. 2)? 
How do you enforce positive definiteness in your model?\\n\\nOriginality (2/5)\\n---------------------\\nI would have expected a more thorough comparison with deep GMMs (van den Oord & Schrauwen, 2014) which appear to be the most closely related models.\\n\\nAnother line of research not being discussed is autoregressive Gaussian mixture models (e.g., Domke, 2008; Hosseini et al., 2011; Theis et al., 2012). These models generalize Gaussian mixture models and are able to efficiently model images of arbitrary size by (like convolutions) making a stationarity assumption. Deep extensions exist as well (Theis & Bethge, 2015).\\n\\nSignificance (1/5)\\n-----------------------\\nA lack of conceptual insights or a principled motivation would be fine if the empirical results made up for it. Unfortunately, the empirical evaluation seems rather limited as well. No comparisons were made to previously published baselines. Instead, all results are only compared to other DCGMM results provided by the authors.\\n\\nThe chosen tasks and datasets (MNIST, FashionMNIST) are rather limited as well. In particular, unconditional image generation is not a well defined task (and certainly not a \\\"main objective\\\" of image modeling) but rather a (poor) proxy for evaluating generative models (see Theis et al., 2016).\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
gZ2qq0oPvJR | Towards Finding Longer Proofs | [
"Zsolt Zombori",
"Adrián Csiszárik",
"Henryk Michalewski",
"Cezary Kaliszyk",
"Josef Urban"
] | We present a reinforcement learning (RL) based guidance system for automated theorem proving geared towards Finding Longer Proofs (FLoP). FLoP is a step towards learning to reason by analogy, reducing the dependence on large scale search in automated theorem provers. We use several simple, structured datasets with very long proofs to show that FLoP can successfully generalise a single training proof to a large class of related problems, implementing a simple form of analogical reasoning. On these benchmarks, FLoP is competitive with strong theorem provers despite using very limited search. | [
"automated reasoning",
"reinforcement learning",
"reasoning by analogy"
] | Reject | https://openreview.net/pdf?id=gZ2qq0oPvJR | https://openreview.net/forum?id=gZ2qq0oPvJR | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"F-DUI8zPt8z",
"8H95pVSK374",
"E9wgfs4Ob1",
"KOVyCWv_6S",
"iMAJkGaQLFZ",
"zkOmMsl9OWQ",
"OPg-RfIxua",
"n-HX-c6gUm"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040514200,
1605626121663,
1605625987733,
1605625478251,
1605625296445,
1604356917510,
1603915187179,
1603747259686
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3402/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3402/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3402/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3402/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3402/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3402/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3402/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper describes an application of reinforcement learning to theorem proving in the connection tableau calculus. The paper does a reasonable job in the application of RL techniques and the high level issues are important. However, as the reviewers note, there is little connection to the notion of \\\"analogy\\\" outside of the very general idea that RL methods learn to generalize to novel situations.\\n\\nI did not find the methods very original, as they seem a somewhat mechanical application of RL methods. That would be fine if the empirical results were convincing or surprising. However, I found the Robinson arithmetic domains not very interesting as the problems were literally arithmetic, as in 2+5 = 7, rather than theorems such as the commutativity of addition. The empirical results were not as convincing in the TPTP domains, where MCTS seemed to dominate.\\n\\nAlso, there are related papers in the area of deep learning applied to theorem proving that I believe dominate this paper (\\\"learning to reason in large theories\\\" and \\\"an inequality benchmark\\\").\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Dear Reviewer #3,\\n\\nThank you for your evaluation. With respect to the connection with analogy, please see the joint response to all the reviews.\\n\\nA very natural continuation of our work is your proposed formalisation of proof by analogy, in which proofs are considered as proper entities and we learn their features. A model that outputs an entire proof - or maybe the first part of a proof - in a single step could be a logical next step, which is the subject of a separate experiment we are currently working on. In the paper, we hope to have shown that we can train models that achieve the kind of confidence in internalising an entire proof pattern required for such systems.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Dear Reviewer #2,\\n\\nThank you for your evaluation. With respect to the connection with analogy, please see the joint response to all the reviews.\\n\\n#### Why the learned policy wouldn't be subject to the exponential blowup?\\n\\nSearch and exponential blowup are likely not avoidable in theorem proving in general. However, mathematics is full of important classes of problems where a deterministic algorithm is achievable, and our aim was to demonstrate that FLoP can discover and internalise such algorithms. The models trained on the arithmetic datasets are confident enough to be evaluated without search/backtracking.\\n\\n#### How sensitive is the system to the training problems from which it forms analogies?\\n\\nTraining problems for Stages 1 and 2 were selected to be simple enough to be solvable by a random policy, while still capturing the complexity of the problem class. Many alternative training problems work just as well. Stage 3 is significantly more complicated. There, a problem can have many different proofs that are very different in terms of how well one can generalize from them. Appendix I gives some examples where a shorter proof can be misleading, because it exploits a shortcut that is not always applicable. We found that the harder the training problems, the less likely they are solvable using such shortcuts, hence the more valuable training signal they provide. Our experiments with easier training problems (~shorter than 20 steps to prove) often resulted in massive overfitting even when using hundreds of problems. Hence, we selected training problems to be hard enough to resist overfitting to some shortcut. Note that problems in Stage 3 are versatile enough that our training problems do not cover all the proof patterns in the test set. 
Many alternative training problems of the same complexity (over 100 steps to prove) can be considered, and we conjecture that we would get better results if we trained on more such problems.\\n\\n#### How much do you depend on the particular feature space?\\n\\nWe decided to use a feature representation that was successfully applied in several previous works in theorem proving, without modification, to make our system better comparable. The features provide a fast baseline and were not tuned for our datasets - we used them as provided by the independently developed fCoP kernel. Learned feature extraction of logical objects is a separate line of research and we agree that a natural improvement on FLoP can be to incorporate such methods. Note, however, that neural formula embedding entails further slowdown of the guidance, making it hard to remain competitive in real time. Several experiments have shown how challenging it is to improve upon the features used in FLoP with neural embeddings (e.g. https://link.springer.com/chapter/10.1007/978-3-030-29436-6_12 and https://arxiv.org/abs/1911.02065v3).\\n\\n#### Minor comments\\n\\nYou are right about Figures 2/3: we use kernel density estimation. We will state this more explicitly in the paper. We are grateful for all the other comments and will update the paper accordingly.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Dear Reviewer #1,\\n\\nThank you for your evaluation. With respect to the connection with analogy, please see the joint response to all the reviews.\\n\\n#### Novelty of the proposed approach\\n\\nWe use fairly standard RL techniques, some of which have not yet been applied to theorem proving. In particular, we use continuous, online learning from rollouts coming directly from the policy. To the best of our knowledge, learning from rollouts had not been applied to theorem proving before the first release of FLoP. Since then, there is at least one other paper using policy gradient for theorem proving: https://arxiv.org/abs/1911.02065v3. This learning setup, coupled with curriculum learning, allows us to train on some rather long proofs in Stage 3, with much less overfitting than in supervised learning. We argue that the selected techniques are a good choice if one wants to explore the space around a particular problem (or a particular proof) to such an extent that it generalises to problems with extreme lengths.\"}",
"{\"title\": \"Response to all three reviewers\", \"comment\": \"Dear Reviewers,\\n\\nWe are grateful for your thorough evaluation. We would like to use this space to answer issues raised by all of you and then respond to individual comments separately.\\n\\n#### Connection with analogy\\n\\nIn our paper, we have interpreted analogical reasoning as building a model that internalizes a proof, and then successfully applies it to a class of related problems, without relying much on search. The trained model is supposed to \\\"know\\\" the proof of an unseen, yet familiar problem. This is a highly simplified approach which does not capture the full potential of analogy, but we argue that it is a meaningful start that will hopefully lead to more refined solutions. In particular, our model does not yet operate at the level of abstraction of proof objects, but rather at that of a sequence of proof steps. Note, however, that this is also true of previous works on theorem proving by analogy, trying to establish direct matchings between steps in one proof and steps in another, using all sorts of methods/heuristics. Machine learning can make this proof step matching process automatic and allows replacing one-to-one mappings with many-to-many mappings. To some extent, this is true of most works using ML for ATP. Our paper has tried to push this direction further, making analogy and elimination of search more explicit. Prior and parallel works have built similar guiding policies, but they have not demonstrated similarly successful internalisation of a proof method. We will make our interpretation of analogy more explicit in the paper.\"}",
"{\"title\": \"How does FLoP relate to reasoning by analogy?\", \"review\": \"This paper proposes a theorem prover based on Proximal Policy Optimization for the connection tableau calculus. This prover is applied to five domain-specific datasets, where theorems are relatively simple but their proofs are long and repetitive. The proposed theorem prover could achieve competitive performance with strong baseline provers, yet requires much less search.\\n\\n\\nI think the assumption of this paper is correct that we need more accurate heuristics rather than more search to find longer proofs. The main issue is that it is unclear what the novelty of the proposed approach is. The approach section of the main paper is quite short, just one paragraph (section 3) and one algorithm. It seems that the main approach is to train the theorem prover by reinforcement learning following a specific learning curriculum. It is not mentioned why the proposed approach has advantages for finding longer proofs, and how the proposed approach is related to reasoning by analogy. Currently, the reasoning by analogy approach mentioned in section 2 seems irrelevant to the proposed approach, except that they may share the same effect and target of reducing the amount of search within the prover. \\n\\nI think the proposed approach shares the same form of heuristic as the prior work on neural theorem proving, namely building a reactive policy to select the next action based on the current proof state. I cannot see how this is related to reasoning by analogy, which is described as \"Reasoning by analogy involves observing the proof of one problem, extracting the core idea, and successfully applying it to another\". Should we consider all neural-based provers as reasoning by analogy, since they are trained with existing proofs? 
\\n\\n=================================================\\nAfter reading the responses from the authors and other review comments, I maintain my previous rating of this paper. I am not convinced that the proposed approach is a simple form of analogical reasoning. Trying to build a relationship between the proposed approach and analogical reasoning is uninformative and misleading.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Impressively scalable search with fairly standard RL, possibly tenuous connection to analogy\", \"review\": \"Summary:\\nThis work introduces a method for learning to prove theorems which can leverage prior proving experience in order to discover very long proofs. At its core it works by inputting a corpus of training problems (which can also be annotated with solutions, i.e. proofs), training a policy to solve these training problems by curriculum learning. The curriculum works by first supervising on the trace of an entire solution, and then, once the system can solve a particular problem, decreasing the amount of the trace that it supervises on. The authors claim that this is a kind of analogical reasoning, because the system's policy is implicitly learning to represent the state/action space on the basis of prior experience.\\n\\nI'm not convinced that this looks very much like the analogical reasoning advertised in the introduction to the paper, which claims \\\"Reasoning by analogy involves observing the proof of one problem, extracting the core idea, and successfully applying it to another\\\". The system does not work by fetching previous proofs and massaging them into a proof for a new problem, except implicitly through the prior knowledge represented in the weights of the policy. My only real objection to this work is that it doesn't function as advertised. In particular I'm not sure why the learned policy wouldn't be subject to \\\"the exponential blowup [which causes] the search... to fail beyond a certain depth,\\\" which afflicts prior methods.\\n\\nIMO this paper should be accepted iff this approach is a new advance for automated theorem proving, but should not be accepted on the basis of its connection to reasoning by analogy. However, it is possible that I have misunderstood how the algorithm works, and would like to be corrected by the authors if indeed the algorithm works as advertised. 
Because I am not well steeped in the automated theorem proving literature, I do not feel qualified to make the call on how big of an advance this is for that community. However, it does not seem like a large advance for deep learning or for combinatorial search.\", \"pros\": [\"seems to scale to some extraordinarily long proofs! For example, their system can discover proofs with steps on the order of 10^3-10^4. Given the combinatorial explosion in the search space (scales exponentially with proof length) this is quite impressive.\", \"reasoning by analogy is crucial yet underexplored. It should be especially important for few-shot learning of proof strategies, and I'm glad that the authors have taken up this line of research.\", \"Cons/questions:\", \"How sensitive is the system to the training problems from which it forms analogies? For instance, experiment 3 hinges on two particular seed expressions--how did you choose them?\", \"How much do you depend on the particular feature space? Appendix F lists what seems like a relatively impoverished feature representation for the state space. Is this intentional, where by narrowing the feature space you improve transfer between the target (training) and source (test) proofs? In that case it would seem that the method would struggle absent careful feature engineering, which could limit its applicability beyond the simple theories considered here (nb: although the theories are simple the proofs are extraordinarily long)\"], \"minor_concerns\": \"typo page 3 - \\\"allows for potential reusing\\\" (potential to potentially)\\nmissing citation page 3 - \\\"Despite the unquestioned role of...\\\" (include citation to \\\"Mathematics and Plausible Reasoning\\\")\\nmake figure one larger\\nfigures 2/3: surely you must be doing some kind of kernel density smoothing here?\\nexperiment 1/table 2: describe E2--which is at ceiling--as exploiting hand-crafted domain-specific strategies for arithmetic, preferably in the caption. 
Otherwise it is confusing why it is at ceiling just by looking at the table\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Does reinforcement learning include reasoning by analogy?\", \"review\": \"This work introduces a new algorithm FLoP for theorem proving using reinforcement learning, and tests it on a new evaluation dataset. FLoP gives direction to a tableau based theorem prover by learning a state machine using curriculum learning applied to a prototype proof. The authors state that this RL technique has not been previously applied to theorem proving. I cannot judge this statement but if true it would seem enough novelty to justify publication. They find that the technique works best on highly structured problems such as proving simple arithmetic statements in unary or binary arithmetic, and less well for problems which benefit from searching through databases of heterogeneous statements.\\n\\nOne could debate whether this deserves the name of \\\"reasoning by analogy\\\". I suspect it should be called \\\"reasoning by imitation\\\". To my mind, the term analogy suggests a reasoning process in which some features are extracted from the proof, and then the proof strategies which work for these feature values are selected out of a large set of possibilities including many with different feature values. I quote from the authors' description at the beginning of section 6 of what they show: \\\"In this highly structured dataset FLoP is capable of extracting a general proof pattern from one or two proofs and successfully generalizing to related proofs of arbitrary length.\\\" This does not sound like analogy as I defined it, rather it sounds like imitating the prototype. \\n\\nStill, since the comparison with other techniques is encouraging, and since the paper is clearly written and gives a very extensive survey of comparable works, I found it enlightening and would recommend to accept it.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
PmUGXmOY1wK | GL-Disen: Global-Local disentanglement for unsupervised learning of graph-level representations | [
"Thilini Cooray",
"Ngai-man Cheung",
"Wei Lu"
] | Graph-level representation learning plays a crucial role in a variety of tasks such as molecular property prediction and community analysis. Currently, several models based on mutual information maximization have shown strong performance on the task of unsupervised graph representation learning. In this paper, instead, we consider a disentanglement approach to learn graph-level representations in the unsupervised setting. Our work is the first to study disentanglement learning for graph-level representations. Our key observation is that the formation of many real-world graphs is a complex process with global and local generative factors. We hypothesize that disentangled representations which capture these global and local generative factors into independent latent units can be highly beneficial. Specifically, for graph-level representation learning, our disentanglement approach can alleviate distraction due to local variations of individual nodes or individual local neighbourhoods. We propose a VAE based learning algorithm to disentangle the global graph-level information, which is common across the entire graph, and local patch-level information, which varies across individual patches (the local subgraphs centered around the nodes). Through extensive experiments and analysis, we show that our method achieves the state-of-the-art performance on the task of unsupervised graph representation learning.
| [
"Unsupervised Graph Representations",
"Disentanglement Learning",
"GNN",
"Unsupervised Learning"
] | Reject | https://openreview.net/pdf?id=PmUGXmOY1wK | https://openreview.net/forum?id=PmUGXmOY1wK | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"RxmaaOxG29",
"7ADxrBnQEz6",
"O74iTYiIaf7",
"fsYKwjVyv2O",
"UhTCU6RsCoz",
"ZjU36EONPuZ",
"UBPGWtv3vi",
"y54yPEiMDFd",
"Inpq8RfLVm",
"2AGEEdmjHHM",
"lhz5oieahAg",
"GUXn8g9b07K",
"Ipxx-O8928w",
"1G0g_rkgyHU",
"Fj8gwSfyf9X",
"SgW4VFH4-l4",
"PFKFeGqZfVx",
"R5l4n8GZ5UA",
"vsuMMcuTShW",
"22cInD8Aeh_",
"KU2nbhXmxhW"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040388507,
1606280388646,
1606262155911,
1606261578302,
1606261063318,
1606258210244,
1606258059743,
1606253844342,
1606185393265,
1606178724772,
1606177017719,
1606176650258,
1606176058811,
1606175127830,
1606174412924,
1606173902804,
1604641427699,
1604037383370,
1603871361060,
1603857775403,
1603849154076
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3401/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3401/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3401/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3401/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3401/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"In this paper, the authors designed a disentanglement mechanism for global and local information of graphs and proposed a graph representation method based on it. I agree with the authors that 1) considering the global and local information of graphs jointly is reasonable and helpful (as shown in the experiments) and 2) disentanglement is different from independence.\\n\\nHowever, the concerns of the reviewers are reasonable --- Eq. (2) and the paragraph before it indeed show that the authors treat the global and the local information independently. Moreover, the disentanglement of the global information (the whole graph) and the local information (the patch/sub-graph) is not well-defined. In my opinion, for the MNIST digits, the angle and the thickness (or something else) of strokes can be disentangled (not independent) factors that have influences on different properties of the data. In this work, if my understanding is correct, the global and the local factors just provide different views to analyze the same graphs and the proposed method actually designs a new way to leverage multi-view information. It is unclear whether the views are disentangled and whether the improvements are from \\\"disentanglement\\\". \\n\\nIf the authors can provide an example to explain their \\\"disentanglement\\\" simply as the MNIST case does, this work will be more convincing. Otherwise, this work suffers from the risk of overclaiming.\"}",
"{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for your valuable time and constructive feedback which helped us to improve our work.\\nThank you for your response and suggestion.\"}",
"{\"title\": \"Response to AnonReviewer2 - Part 3\", \"comment\": \"**Comment 2** answer continued ...\\n\\nFurther evidence that our learned global latent variables carry critical graph level information comes from the evaluation on graph classification task. In Appendix G, we evaluate the impact of different combinations of global/local latent (Eq. 11) on graph level task performance. We observe that using only learned global latent variables ($\\\\lambda$ = 0) achieves the best performance in graph level classification. On the other hand, when $\\\\lambda$ = 1 in Eq. 11, i.e., only local factors are used for graph classification, the performance drops significantly. This shows that global latent variables carry critical graph level information in the generative process. We remark that these global/local representations are learned in unsupervised settings; then the representations are tested in SVM classifiers. \\n\\n\\n---\\n\\n**Comment 3** \\u201cIt seems that the visual results showcase a similar pattern among the local and global factors, despite the difference that the signal is stronger for the local factors\\u201d\\n\\nTo validate that local embedding values are not larger in magnitude (stronger signals) than their global counterparts, here we plot the histograms (Please click on the dropbox link [View Histogram](https://www.dropbox.com/s/nqyuqsub8r8l4wo/mutag_plots_histo_6.pdf?dl=0) as inline images are not supported here.) of both local and global latent patch embeddings for the graph in Fig 2(b) of updated manuscript. Histogram in the left illustrates how global factor representations (embeddings) range which has a value range of (0.2 - 0.4). On the right we illustrate the local latent representation value range of 0.0 - 0.4. From this we can clearly observe that both global factor embeddings and local factor embeddings are in the same value range. Local factors don\\u2019t have stronger values. 
The only difference is that local factors have larger value variation among patches.\\n\\n**In conclusion, reviewer\\u2019s statement \\u201cdespite the difference that the signal is stronger for the local factors\\u201d is not correct. We remark that the numerical results and interpretations in our analysis are widely used in existing disentanglement learning works [b,c,d].**\\n\\n[a] Representation learning: A review and new perspectives. In IEEE Transactions on Pattern Analysis & Machine Intelligence, 2013\\n[b]beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, ICLR 2017\\n[c] Disentangled graph convolutional networks, ICML, 2019\\n[d] Unsupervised model selection for variational disentangled representation learning. ICLR 2020\\n[e] InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, NIPS 2016.\\n\\n---\\n\\nAgain, our apologies if there was anything unclear in our paper which may have caused misunderstanding. We hope our discussion above and our updated paper can clarify the misunderstanding. We humbly request the reviewer to reassess our work in light of these clarifications, and we deeply appreciate reviewer\\u2019s valuable time.\"}",
"{\"title\": \"(Continued) Response to AnonReviewer2 - Part 2\", \"comment\": \"**Answers for the major concerns**\\n\\n**Comment 1** \\u201cThe notion of disentanglement is not well-defined in the first place. In the VAE setting where the hidden factors are stochastic, does disentanglement refer to independence? Or they are orthogonal under a specific measure induced by the graph itself? The claims made by the authors can never be examined rigorously (the visual results do not constitute supportive evidence as I shall discuss later).\\u201d\\n\\nOur apology if our notion of disentanglement is not clear, **but our notion of disentanglement is the common one widely used in existing work [a,b]**. Following [a, b], a disentangled representation can be defined as one where a single latent unit is sensitive to changes in single type of generative factors, while being relatively invariant to changes in other types of factors. \\n\\n**We humbly point out that there are differences between disentanglement and independence, as explained in [b].**\\n\\n**Our methods in examining disentanglement are the same as existing work in disentangle learning.** Measuring correlation among disentangled latent factors (Fig 2(a) of our updated paper) is a commonly used mechanism to evaluate the quality of disentanglement [c,d]. Also calculating pair-wise embedding difference using Mean Absolute Pairwise Difference was first proposed in [b] which we used (Fig 2(b) of our updated paper) to compare the amount of patch wise variation for each latent factor to showcase that MAPD for global factors is low as it is common to all the patches. 
(In the last answer we experimentally show that both local and global factors lie on the same value range and the signal magnitudes are similar; the reason for high MAPD for local is due to its high variation across patches.)\\n\\nIn addition, we have added more analysis in Sec 4.2.2 in our updated paper where we used a synthetic dataset with a known global generative factor and demonstrate that our global latent factor ($z_g$) is the one which maps to this global generative factor, not the local factor. We also show generated graphs to visualize the impact of our disentangled factors (both global and local) on the GL-Disen\\u2019s generative process to verify that our model indeed disentangles those two factors. We remark that such visualization is common in disentanglement learning work to validate disentanglement (most previous work focused on images) [b, e].\\n\\n---\\t\\n\\n**Comment 2** \\u201cThere is no guarantee that the so-called global and local factors are not confounded. Both the global and local reconstruction terms involve the two types of factors. Given the high expressivity of deep learning models, the local factors can easily manage both tasks, or the global factors are merely enhancing the signals of the local factors. There no mechanism to prevent the cross-terms during the optimization, so the learning process of the global and local factors confounded as a result of how the authors design the objective function.\\u201d \\n\\n\\nOur apology if this is unclear, but as we have clarified above, **we do have a proposed mechanism of accumulation on top of Beta-VAE to disentangle global and local factors (see Figure 1 in our paper), and we have adopted analysis methods in existing disentangling works [b,c,d] to validate that our learned representations are indeed disentangled to a large extent as we discussed in Comment 1. Specifically, in Sec. 
4.2.1 in our paper we demonstrate the correlation between our global and local factors are very close to 0.0 showing they are able to capture different variations/generative factors in input data which don\\u2019t correlate.**\\n\\nIn addition, To elaborate that our learned global latent variables carry critical graph level information in the generative process, we refer to sec 4.2.2 - Synthetic graph based experiments and Fig. 4 on the updated manuscript. In Fig 4(b), we show generated sample graphs using disentangled global and local factors. In each row of Fig 4(b), the local latent factors are fixed and in each column the global factors are fixed. When we consider a single row, we could observe that, the edge density of the graph changes with the change of global factors. Although 2 rows have two structurally different graphs (nodes have different neighbourhoods, which was captured by local factors), the global factor has been able to change the edge density of those 2 in a similar manner. If only local factors are necessary and global factors merely enhance signals of local factors, then every graph in the same row should look alike. This shows that graph level generative information is captured by global latent variables. Therefore the global latent variables are necessary for the generative process.\"}",
"{\"title\": \"(Continued) Response to AnonReviewer2 - Part 1\", \"comment\": \"We thank the reviewer for the useful feedback. We have updated our manuscript to clarify the unclear areas you have mentioned and discuss in detail here.\\n\\n\\n### **Clarifications about the novelty of this work**\\n\\n\\nRegarding reviewer's comment \\u201cThe setting of the problem adopts the graph VAE setting in [1,2] (which I think the authors should mention in the related work), and the ELBO & local aggregation (convolution) approaches used in this paper are relatively standard in the generative modelling and graph representation learning domain.\\u201d\\n\\nThank you for pointing out that we have overlooked the citation in places where we have referred GVAE in the paper. We fixed that. \\n\\nOur apologies if our writing is not clear, but the main contribution of our work is global/local disentanglement for graph representation learning. **The techniques mentioned by the reviewer: graph VAE (Kipf and Welling), ELBO, convolution are our backbone system and basic components. Critically, these techniques mentioned by the reviewer (i.e. graph VAE, ELBO, convolution) are not sufficient to achieve global/local disentanglement for graphs.** Therefore, our main contribution is a new method built on top of these techniques mentioned by the reviewer **and other critical ideas to achieve global/local disentanglement for graphs, as explained below.**\\n\\nWe have updated the manuscript and we hope this can clear any misunderstanding (end of Sec 3). In particular, our method is built on top of Beta-VAE [Higgins et al. 2017], which has been applied mostly to images. However, there is a key difference between Beta-VAE and our work. As reviewer may know, **Beta-VAE baseline cannot discover global factors automatically:** Beta-VAE discovers independent latent factors, but there is no way for a baseline Beta-VAE to understand if these factors are global / non-global in the unsupervised setting. 
Usually, some manual inspection is performed on the learned latent variables. E.g., for images, one needs to perform traversal of individual latent variables one by one, and observe their effects (e.g. change in azimuth). Please see Beta-VAE Figure 2. Not only this would require manual effort but also this could be very difficult for graphs, since many graphs represent very specialized knowledge, e.g. protein-protein interaction, and it is very difficult to understand the observed effects and determine which factors are global/local. \\n \\nIn our work, we add on top of Beta-VAE **an accumulation step** for the GNN encoder outputs of vertices belonging to the same graph (see Fig 1 in our paper). This forces **automatic** emergence of the global factors - common information across all the vertices. This mechanism is critical for our idea to extract representation for the whole graph, and we are able to capture global factors without a priori knowledge of the generative factors.\\n\\n**Regarding another paper [2], they did not propose any idea related to disentangling,** and it is about a variation of GVAE for growing graphs. While they propose a variation of GVAE for growing graphs where the generation of each new node is conditioned on all existing nodes, **we aim at a variation of GVAE which is capable of disentangling global and local factors from a fixed graph. [2] does not have a mechanism for capturing global latent information** as their adaptive ELBO in eq 7 [2] term 2 is node wise unlike our global KL divergence single term for the entire graph in our ELBO (2nd term eq 6 of our paper). Due to this, our work is significantly different from [2]. \\n\\n\\n[2] Xu, Da, et al. \\\"Generative graph convolutional network for growing graphs.\\\" ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019\"}",
"{\"title\": \"Response to AnonReviewer1 - Part 2\", \"comment\": \"**Comment 2 - Comparison with existing disentangling methods**\\n\\nThank you for pointing out these references. We have carefully gone through these works. Detailed comparison can be found in Appendix M and we have updated our related work section to include these. Overall, our idea of global-local disentanglement is distinct from these works.\\n\\nAlthough the graph-graph proximity [Bai et al. IJCAI\\u201919] is aimed at unsupervised learning, their approach is different and they did not propose any ideas related to disentanglement. The other major difference this method has with GL-Disen and all other latest graph learning models we have compared is that it uses a pairwise graph comparison mechanism to learn graph level similarities (Bai et al. IJCAI\\u201919 Fig 1(b)). Our proposed GL-Disen does not require this expensive comparison to learn good graph representations, as our method removes irrelevant local information via disentanglement.\\n\\nFactorGCN [Yang et al. NIPS 20] does not have any mechanism to learn global level factors which are common to the entire graph. Although it can learn a set of factors under which nodes are connected to each other, they cannot determine which factors are locally important and which are globally relevant. On the other hand, FactorGCN is a supervised model (sec 3.4 FactorGCN [Yang et al. NIPS 20]) while our work GL-Disen is unsupervised.\\n\\nNED-VAE [Guo et al. KDD 20] also does not disentangle factors common for the entire graph (global factors) from the factors specific for each local patch. Their unsupervised disentanglement mechanism aims at disentangling node features, edge features and the node-edge joint. Using the loss function term A (Sec 4.2.2 [Guo et al. KDD 20]) they try to make node, edge and node-edge joint features independent of each other. 
We do not impose such restrictions in our model as we only want to separate out features common for the entire graph and features specific to local patches. Our model has the flexibility of using either node or edge or joint features and extracting globally relevant information from any of these, while also separating out local features which are specific for patches. Their node-edge joint representation is like a combination of both nodes and edges. In Fig 3 [Guo et al. KDD 20], third column samples, it seems that while the node and edge factors have disentangled graph features (first 2 columns of Fig 3), the node-edge joint factor has entangled them again (both edge density and node values change with it). We believe this is expected, as it is mainly used to inform node and edge decoders about the structure of that particular individual graph (since node and edge encoders have no other mechanism to share information among them). Basically it captures the uniqueness of each individual graph with respect to node and edge features and structures, while our global latent captures the common global generative factors which the entire graph dataset was generated from. For GL-Disen, we demonstrate in Appendix G that removing individual node/edge/patch specific local information using disentanglement and separating out global factors is beneficial for graph level tasks such as classification.\\n\\n---\\n**Comment 3** \\u201c The paper mentioned that the global and local latent generative factors are sampled from their respective posterior distributions. More details are expected.\\u201d\\n\\nThe outputs from the encoder of our GL-Disen are two sets of parameters for each patch in the input graph. The first set is the mean and variance values of the individual distributions of local latent factors. Each patch in the graph has its own posterior distribution. Then we sample a local latent representation for each patch from these individual posterior distributions. 
Since each patch/node can be different from one another, sampling from individual distributions separately is reasonable. The second set of mean and variance parameters output from GL-Disen encoder are, to represent a posterior distribution which is common for all the patches of the graph: the distribution for global latent factors. Unlike patch-wise individual distributions for local latent, global latent has only a single distribution for the entire graph (We include an accumulation operation in Eq. 9 of the paper to obtain a single posterior distribution). Then we sample a single global latent variable from this distribution as the global latent representation which is used for graph level tasks.\\n\\n---\\nWe are trying to conduct additional experiments for other tasks as suggested by the reviewer.\"}",
"{\"title\": \"(Continued) Response to AnonReviewer1 - Part 1\", \"comment\": \"We thank the reviewer for the useful feedback\\n\\n**Comment 1 - Evaluation Tasks**\\n\\nIn our paper, one main reason for selecting graph classification is as follows. Currently, infomax principle based methods dominate unsupervised graph-level representation learning, and results are reported for the graph classification task (InfoGraph[1], CMV[2]). Therefore, we focus on this task so that we can compare to their results properly, to understand how our proposed global-local disentanglement approach compares with the infomax approach for unsupervised graph-level representation learning. \\n\\nWhile state-of-the-art infomax methods like InfoGraph[1], CMV[2] aggregate all information from all the patches to generate the global representation, our proposed GL-Disen has an explicit specialized mechanism to remove irrelevant local information for graph level representations (without aggregating all); that is disentangling. We wanted to evaluate performance of this approach: explicit removal of irrelevant local information and retain of global information, and how this approach compares with methods which have no explicit mechanisms to remove irrelevant local information, for graph level tasks. \\n\\nIn addition, in the updated manuscript Sec 4.3, we have extended our experiments for node level tasks as well. As the focus of our paper is on disentangling global graph-level latent representations and local patch-level representations, we conducted focused experiments to show how these disentangled information affects node level tasks and graph level tasks. For graph level tasks, we show superior performance when using only global representation, and this validates our hypothesis that local information is distractive for graph-level tasks. For node-level tasks, interestingly, we show better performance when local representation is combined with some global information ($z_g$). 
This observation is consistent with recent work GraphWave[3] which has stated that identifying distant nodes with similar neighbourhood structures is a strong fact for node level task performance. Our combined method increases the performance due to the fact that $z_g$ has been able to capture those long distance similarities. Although recent methods such as DGI [4] have claimed that their \\u201cderived patch representations are driven to preserve mutual information with the global graph summary, this allows for discovering and preserving similarities on the patch-level\\u201d for long distance, they neither have empirical analysis nor capability to explicitly evaluate the optimal amount of global information required. In our work, our global/local disentangled representations from GL-Disen enable us to explicitly control the amount of global/local information and combine them, and we are able to explicitly demonstrate that combination of global/local can achieve the best performance for node level task, please see our Table 6 (Appendix I) of our experiments.\\n\\nWe also added Sec 4.2.2 to the updated manuscript to elaborate more on the explanations of our disentangled factors.\\n\\n[1] InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation, ICLR 2020\\n[2] Contrastive Multi-View Representation Learning on Graphs, ICML 2020\\n[3] Learning Structural Node Embeddings via Diffusion Wavelets, KDD 2018\\n[4] Deep Graph Infomax, ICLR 2019\"}",
"{\"title\": \"Response to AnonReviewer3 - Further clarification\", \"comment\": \"We thank again the reviewer for the useful feedback and pointing out [1].\\n\\nWe would like to highlight that in our revised paper we have included new discussion regarding the difference between our work and Beta-VAE (Higgins et al., 2017). Please find the discussion at the end of Section 3.\\n\\nWe hope our response has clarified reviewer's concern especially related to [1] Charakorn, Rujikorn, et al. \\n\\nWe would be grateful if reviewer can reassess our work in light of these clarifications, and we deeply appreciate reviewer\\u2019s valuable time.\"}",
"{\"title\": \"Thanks for the responses\", \"comment\": \"I thank the authors for their detailed responses and for revising the paper substantially. I believe the paper has improved its quality after the revision and thus improve my score accordingly.\", \"a_quick_follow_up_comment\": \"though I appreciate the synthetic dataset added in the revision, estimating the parameter p in the ER graph may be too simple since it can be estimated directly by the number of edges / the number of nodes^2. Adopting more complicated synthetic datasets, e.g., the stochastic block model or the forest fire model, will make the results more convincing.\"}",
"{\"title\": \"Rebuttal Paper Revision\", \"comment\": \"We thank all the reviewers for their valuable feedbacks.\\n\\nUpdated manuscript contains analysis, experiments and explanations requested by our reviewers.\\nAll additions and updates to the original submission are indicated in blue for easy reference.\\n\\n**Updates to the main paper**\\n1. End of Sec. 3 - Comparison with $\\\\beta$-VAE \\n2. Fig 2 - (a) Added the GVAE reference of entangled method for correlation comparison\\n3. Sec. 4.2.1 - Added the GVAE reference of entangled method for correlation comparison\\n4. Sec 4.2.2 - Added a Synthetic dataset and experiments to elaborate intuitive meaning of disentangled factors\\n5. Section 4.3 - Node level task evaluation\\n\\n**Updates to the Appendix**\\n1. Appendix C - Model complexity Analysis\\n2. Appendix F - Moved the comparison of GL-Disen with Kernel methods to here and discussion of pros and cons of Kernel vs GNN\\n3. Appendix H - How can this method prove that each factor is necessary for the generative process?\\n4. Appendix I - Detailed analysis of Node level task\\n5. Appendix A - added more detailed discussion comparing existing disentangled methods to ours\"}",
"{\"title\": \"Response to AnonReviewer4 - Part 3\", \"comment\": \"**Q4: What is the real meaning of each factor?**\\n\\nIn our analysis discussed in sec 4.2.2 - Synthetic graph based experiments, we show that the learned global latent variable captures the graph-level generative factor of the Erdos-Renyi graphs, i.e., probability p for the synthetic graph to include an edge between node i and node j. Furthermore, the learned local latent captures the local randomness in the generative process of Erdos-Renyi graphs. \\n\\nHowever, for many real-world complex data/graphs, meanings of generative factors are not available (Higgins et al. (2017)), e.g. global/local generative factors for molecular graphs are not accessible except for molecular scientists. On the other hand, our method requires no a priori knowledge of these factors. In particular, even though there is no knowledge regarding the meaning of the global / local factors, our method can capture global factors into a representation, excluding the local factors. This representation is sufficient as our main focus is on graph level tasks.\\n\\n\\n**Q5: Complexity Analysis**\\n\\nWe would like to discuss the time and space complexity of GL-Disen compared to our baseline GVAE. Most of the computation complexity comes from the GNN encoder (Eq.8) where the time and space complexity is $O(V^2)$ for a single GNN layer with $V$ number of nodes in the graph and for GNN with $N$ layers, it becomes $O(V^2N)$. The only difference between the GL-Disen encoder and the GVAE encoder is that, due to disentangling, the GL-Disen encoder outputs two different parameter sets for global and local posterior distributions instead of one as in the baseline GVAE. Therefore we need two additional GNN layers. Since it is a constant addition, the overall complexity stays at $O(V^2N)$ scale. The decoder complexity for both GVAE and GL-Disen is $O(V^2)$ with adjacency reconstruction being the dominant component (Eq.10). 
The two additional steps GL-Disen has for the disentanglement, between the encoder and decoder, are as follows: (A) accumulating using Eq. 9, and (B) combining global and local samples to feed to the decoder. Both steps (A) and (B) are linear operations during both training and inference, with complexity $O(V)$ in both time and space. Compared to the high complexity of the GNN encoder and decoder common to both GVAE and ours, this linear increment for disentanglement is not significant. \\nIn the updated manuscript, we have included this time/space complexity analysis of our method.\"}",
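The O(V^2 N) argument in the response above can be sketched numerically. The following is a minimal illustration only, not the paper's implementation: `gnn_layer`, `encode`, and all shapes and weights are hypothetical names. The point is that the dense `A @ H` product dominates each layer at O(V^2), and the two extra posterior-parameter heads add only a constant number of layers.

```python
import numpy as np

def gnn_layer(A, H, W):
    # Dense message passing: the A @ H product costs O(V^2 * d) for a graph
    # with V nodes, dominating a single layer's time and space complexity.
    return np.tanh(A @ H @ W)

def encode(A, X, shared_Ws, W_local, W_global):
    # N shared GNN layers: O(V^2 * N) overall for the backbone.
    H = X
    for W in shared_Ws:
        H = gnn_layer(A, H, W)
    # Disentangling heads: two extra layers produce separate posterior
    # parameters for the local and global latents -- a constant additive
    # cost, so the asymptotic complexity stays O(V^2 * N).
    return gnn_layer(A, H, W_local), gnn_layer(A, H, W_global)

rng = np.random.default_rng(0)
V, d = 6, 4
A = (rng.random((V, V)) < 0.5).astype(float)   # toy adjacency matrix
X = rng.standard_normal((V, d))                # toy node features
Ws = [rng.standard_normal((d, d)) for _ in range(3)]
local_p, global_p = encode(A, X, Ws,
                           rng.standard_normal((d, d)),
                           rng.standard_normal((d, d)))
```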
"{\"title\": \"(Continued) Response to AnonReviewer4 - Part 2\", \"comment\": \"**Point 3** Answers to each question:\\n\\n**Q1: What is the best number of generative factors, which is important for this method?**\\nEmpirically, we have performed experiments on the effects of different latent variable dimensions (Appendix J.3, which was Appendix E.3 in our original submission).\\n\\nAnalytically, however, it is very difficult to derive the best number of generative factors. For many complex data types such as molecular graphs, a priori knowledge of the underlying generative factors is not available. Also, in our unsupervised setup, we do not get any supervision from downstream tasks during training.\\n\\nThe main focus of GL-Disen is to separate out the generative factors into local and global groups, i.e., global/local disentanglement. Then we extract graph-level representations from the set of global factors. **Importantly, there is no need for us to explicitly separate each of the global and local factors within those sets individually.**\\n \\n**Q2: Can mode collapse occur with this method, and how can it be validated or prevented?**\\nThe scope of this work is evaluating how disentanglement can be used for learning graph-level representations, and we proposed a GVAE-based approach as a proof of concept. Indeed, posterior collapse is a fundamental problem for VAEs. Our method builds on top of GVAE; therefore our work may suffer a certain amount of posterior collapse. There are fundamental works addressing posterior collapse [1,2]. In principle, we can integrate such ideas into our system to alleviate mode collapse, and this may further improve the performance. 
\\n[1] Preventing posterior collapse with \\u03b4-VAEs, ICLR\\u201919\\n[2] Avoiding Latent Variable Collapse with Generative Skip Models, AISTATS\\u201919 \\n\\n**Q3: How can this method prove that each factor is necessary for the generative process?**\\nIn what follows, we discuss how we validate that our learned local and global latent variables carry critical information for the generative process.\\n\\nAs we discussed, the main focus of GL-Disen is to separate out the generative factors into local and global groups, i.e., global/local disentanglement, and that is sufficient for our task. There is no need for us to explicitly separate each of the global and local factors within those sets individually.\\n\\nTo evaluate the necessity of our local/global latent variables, we first calculated the node feature reconstruction error for the MUTAG dataset and obtained the following results. The MSE when both global and local factors are fed to the decoder is 0.03256, and it increases to 0.03654 when the global factors are removed (local only). When only global factors are fed to the decoder (global only), the error further increases to 0.08329. From these errors, we can observe that the local latent factors have the largest impact on the generation of the node features. This is expected, as global factors are common to all the patches of a given graph. Therefore, to reconstruct the node features (which differ from node to node), local factors are crucial. However, we can observe from the difference between the full-model and local-only errors that our model does not ignore the global factors during node feature generation, hence showing that they are also necessary. \\n\\nNext, we show that our learned global latent variables carry critical graph-level information in the generative process. We refer to Sec. 4.2.2 - Synthetic graph based experiments and Fig. 4 in the updated manuscript. In Fig. 4(b), we show generated sample graphs using disentangled global and local factors. 
In each row of Fig. 4(b), the local latent factors are fixed, and in each column the global factors are fixed. When we consider a single row, we can observe that the edge density of the graph changes with the change of global factors. Although the two rows contain structurally different graphs (their nodes have different neighbourhoods), the global factor changes the edge density of both in a similar manner. If only local factors were necessary, then every graph in the same row would look alike. This shows that graph-level generative information is captured by the global latent variables. Therefore the global latent variables are necessary for the generative process. \\n\\nFurther evidence that our learned global latent variables carry critical graph-level information comes from the evaluation on the graph classification task. In Appendix G, we evaluate the impact of different combinations of the global/local latents (Eq. 11) on graph-level task performance. We observe that using only the learned global latent variables ($\\\\lambda$ = 0) achieves the best performance in graph-level classification. On the other hand, when $\\\\lambda$ = 1 in Eq. 11, i.e., only local factors are used for graph classification, the performance drops significantly. This shows that global latent variables carry critical graph-level information in the generative process. We remark that these global/local representations are learned in unsupervised settings; the representations are then tested with SVM classifiers.\"}",
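The reconstruction-error ablation described in this thread (decoder fed both latents, local only, or global only) can be sketched with a toy decoder. This is a hypothetical illustration, not the paper's model: `decode_nodes`, `ablation_mse`, and the data are made up, and the MSE values quoted above come from the authors' experiments, not from this sketch.

```python
import numpy as np

def decode_nodes(z_global, z_local, W):
    # Toy decoder: reconstruct node features from the combined latents
    # (the single global latent is broadcast to every node).
    return np.tanh((z_local + z_global[None, :]) @ W)

def ablation_mse(X, z_global, z_local, W):
    # Feed the decoder both latents, local only (global zeroed out), or
    # global only (local zeroed out), and compare reconstruction MSE.
    settings = {
        "full": (z_global, z_local),
        "local_only": (np.zeros_like(z_global), z_local),
        "global_only": (z_global, np.zeros_like(z_local)),
    }
    return {name: float(np.mean((decode_nodes(g, l, W) - X) ** 2))
            for name, (g, l) in settings.items()}

rng = np.random.default_rng(0)
V, d = 8, 4
z_local = rng.standard_normal((V, d))
z_global = rng.standard_normal(d)
W = rng.standard_normal((d, d))
X = decode_nodes(z_global, z_local, W)  # pretend these are the true features
errors = ablation_mse(X, z_global, z_local, W)
```

By construction the "full" setting reconstructs perfectly here; the interesting comparison is how much each zeroed-out setting degrades.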
"{\"title\": \"(Continued) Response to AnonReviewer4 - Part 1\", \"comment\": \"We thank the reviewer for the useful feedback.\\n\\n**Point 1** \\nThank you for pointing out these related works [1], [2]. Apologies for the lack of clarity, **but in our original submission, we have already compared GL-Disen with Contrastive multi-view [1] in Table 1 (CMV) and discussed it in our Section 2 - Related work.** Following the same experimental setup, our proposed method outperforms CMV consistently across all datasets.\\nRegarding [2], the self-supervised training of GCN paper, they propose self-supervised strategies such as link removal and feature covering to improve GCN\\u2019s feature learning ability. In particular, **[2] does not propose any mechanism for graph-level representation learning**, as GCN is already known for its patch-level feature learning ability. The focus of our work is graph-level representation using a new global/local disentanglement approach. \\n[1] Contrastive Multi-View Representation Learning on Graphs, ICML 2020\\n[2] Self-supervised Training of Graph Convolutional Networks. Arxiv 2020\\n\\n---\\n\\n**Point 2** \\nApologies if we did not explain the definition of global and local factors clearly. The definitions of global/local are as we mentioned in the abstract: \\u201cWe propose a VAE based learning algorithm to disentangle the global graph-level information, which is common across the entire graph, and local patch-level information, which varies across individual patches (the local subgraphs centered around the nodes).\\u201d We have some explanation in the introduction, but we agree more explanation is better.\\n \\nIn the revised manuscript, following the reviewer\\u2019s suggestion, we have added a new synthetic graph analysis for a better explanation of global/local; please see Section 4.2.2. The synthetic graphs are Erdos-Renyi (ER) graphs. 
The ER(n,p) graphs are synthetic graphs with two global generative factors: the number of nodes n and a parameter p \\\\in [0, 1], the probability with which the synthetic graph includes an edge (i, j) for 1 <= i < j <= n. In our experiment, we focus on the parameter p, as n is too easy to learn. Therefore, we create a training dataset of ER(n,p) with fixed n and varying p. In this dataset, the global generative factor is p and the local factor is the local randomness. We verify that our method can discover global / local factors by performing traversal of the learned latent variables, similar to other disentanglement learning work such as Beta-VAE [a] and InfoGAN [b] (but they focused on images). We emphasize that our setup is **unsupervised**: the generative factor p is unknown to our method, and our method discovers this global factor.\\n\\nFor real-world data, a priori knowledge of global/local factors is usually not available, e.g., molecular graphs, which are very specialized. For graphs modeling a Reddit discussion thread, one of the global factors could be the topic of the comments, because all the users have discussed it. But it is very difficult to know the exact set of global/local factors, as with other real-world graph data. \\n\\nIt should be noted that our method does not require a priori knowledge of global/local factors. Our method disentangles the factors into global and local, captures the global factors into a graph representation, and this is sufficient for many graph-level tasks such as classification. There is no need for us to understand the semantics of these global factors. \\n \\n[a] beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, ICLR 2017\\n[b] InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, NIPS 2016\"}",
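The synthetic ER(n,p) training set described above (fixed n, varying global factor p, with the local factor being per-edge randomness) can be generated in a few lines of NumPy. A minimal sketch; `er_graph` and `make_training_set` are hypothetical names, not from the paper's code:

```python
import numpy as np

def er_graph(n, p, rng):
    # ER(n, p): include each edge (i, j), 1 <= i < j <= n, independently
    # with probability p; return a symmetric 0/1 adjacency matrix.
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper | upper.T).astype(int)

def make_training_set(n, ps, graphs_per_p, seed=0):
    # Fixed n (too easy to learn); vary the global generative factor p.
    # The local factor is the per-edge randomness within each sample.
    rng = np.random.default_rng(seed)
    return [(er_graph(n, p, rng), p) for p in ps for _ in range(graphs_per_p)]

data = make_training_set(n=20, ps=[0.1, 0.3, 0.5, 0.7, 0.9], graphs_per_p=10)
```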
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the useful feedback.\\n\\n**Q1**: \\u201cthe follow paper[1] exactly disentangle local and global information into two separate sets of latent variables within the VAE framework. It seems that migrating this idea under graph is straightforward.\\u201d\\n[1] Charakorn, Rujikorn, et al. \\\"An Explicit Local and Global Representation \\u2026\\u201d\\n\\nWe thank the reviewer for pointing out [1]. We read it very carefully. **We believe [1] cannot be applied to our problem** (if we have overlooked some details, we hope to receive the reviewer\\u2019s feedback, and we deeply appreciate the reviewer\\u2019s comment).\\n\\n[1] has an interesting and simple idea. The key idea is to produce auxiliary data and to pass that into another VAE (Fig. 2). In their approach, **producing the aux data is crucial and not easy** ([1], Sec 3): the aux data needs to be created such that it contains only local information, excluding global information. Even for images, they need to do this meticulously, choosing different patch sizes manually for different datasets (Fig 3), so that they can (i) preserve local correlations between pixels within each patch and (ii) reduce global long-range correlations between pixels (Sec 3.1). [1] has done this **manually** for different datasets (Sec 4.2.2, last paragraph): \\u201clarger patch sizes can result in z_l occasionally being used to represent the digit identity\\u201d (thus (ii) cannot be satisfied); \\u201cfor CelebA, by visual inspection, we find that a patch size too small can worsen the disentanglement\\u201d ((i) cannot be satisfied).\\n\\nIn our problem, we do not want to perform such meticulous tuning and supervision. In fact, we follow the problem setup in [Higgins et al. 2017]: unsupervised disentangled learning of complex data, where no a priori knowledge of the generative factors exists and little to no supervision for discovering the factors is available. 
(our apologies if this was unclear)\\n\\nEven if we change our problem setup and allow meticulous manual supervision, graphs are significantly more difficult than images. For example, if we consider datasets like MUTAG or PTC, which contain graphs representing properties like heteroaromatic nitro compounds and carcinogenicity, a person needs thorough knowledge of the domain to understand what these mean and which graph structures are meaningful and which are not. Therefore, it is very difficult to produce aux data in order to apply the methods in [1]: what transformation can be used to retain local correlations and reduce global long-range correlations? How could we know a transformation is good, so that we can produce the aux data and pass it as input for [1]? For images, humans can view the data and perform tuning ([1], Fig 3); for graphs, it is not easy for humans to assess, especially since some graphs are very specialized, such as those in MUTAG or PTC.\\n\\n**This discussion highlights some subtle but important details of global / local disentanglement for graphs, which our work is the first to study.** In our work, we add an accumulation step on top of Beta-VAE to force the emergence of global information (our Fig 1). This accumulation step allows us to disentangle global / local information in unsupervised settings and extract a graph-level representation from the global information, and we do not need to understand the complex meanings of the specialized graph datasets. Our only assumption is that graph-level generative factors produce common effects on all vertices of the same graph. \\n\\nPlease see our response to AnonReviewer5, \\u201cNote about novelty\\u201d, regarding the difference between our work and Beta-VAE.\\n\\n---\\n\\n**Q2**. \\u201cIn Figure 4 \\u2026 Does that mean the local factor contribution little to the overall performance?\\u201d\\n\\nYes, the reviewer is correct, and this is our message. 
Fig. 4 (Fig. 5 of the updated manuscript) shows graph classification accuracy - a global task. With global/local disentanglement, the local latent factors capture only local variations, which are distractions for a global task. Therefore, not including them and using only the global representation (red line) achieves the best performance.\\n\\nWe thank the reviewer for the comments, and especially for the careful review, including the Appendix.\"}",
"{\"title\": \"Response to AnonReviewer5 - Part 2\", \"comment\": \"4. We updated Figure 2(a) to compare the disentanglement results of GL-Disen with our baseline GVAE, which does not have any capability to disentangle global- and local-level information. The main difference between GL-Disen and GVAE is that GL-Disen has an accumulating module to enable global/local disentanglement. The network architectures and training parameters are the same. We observed that the correlation of global and local latent variables for GL-Disen is almost 0.0, while that for GVAE is considerably higher, around 0.4. This is because GVAE has neither an explicit mechanism to determine global and local information nor any way to achieve global/local disentanglement.\\n---\\n\\n**Answers to other questions:**\\n\\n5. The scope of this work is evaluating how disentanglement can be used for learning graph-level representations. Indeed, posterior collapse is a fundamental problem for VAEs. Our method builds on top of GVAE; therefore our work may suffer a certain amount of posterior collapse. There are fundamental works addressing posterior collapse [1,2]. In principle, we can integrate such ideas into our system to alleviate mode collapse, and this may further improve the performance. \\n[1] Avoiding Latent Variable Collapse with Generative Skip Models, AISTATS 2019\\n[2] Preventing posterior collapse with \\u03b4-VAEs, ICLR 2019 \\n \\n6. One of the major aspects of kernel methods is that they use manual processes (graph traversals like depth-first search) to find all possible paths for substructures like random walks, trees, or graphlets. Then they compare all those pairs of paths in each pair of graphs to calculate kernel values and find similarities. This is a very expensive operation. For small graphs this gives better results, as it covers all possible neighbourhoods. However, as the GCKN [f] paper mentions, when there are very large, dense graphs, they are unable to extend this method. 
This can be a reason that kernel-based methods do not evaluate on denser datasets like Reddit. On the other hand, GNNs achieve efficiency by eliminating manual path enumeration and graph-to-graph pairwise comparison, and by restricting neighbourhoods to random walks only. However, even with limited neighbourhoods, GNNs, especially with our disentanglement mechanism, have been able to achieve almost similar performance. \\n\\n7. We would like to discuss the time and space complexity of GL-Disen compared to our baseline GVAE. Most of the computational complexity comes from the GNN encoder (Eq. 8), where the time and space complexity is $O(V^2)$ for a single GNN layer on a graph with $V$ nodes; for a GNN with $N$ layers, it becomes $O(V^2 N)$. The only difference between the GL-Disen encoder and the GVAE encoder is that, due to disentangling, the GL-Disen encoder must output two different parameter sets for the global and local posterior distributions instead of one, as in the baseline GVAE. Therefore we need an additional 2 GNN layers. Since this is a constant addition, the overall complexity stays at the $O(V^2 N)$ scale. The decoder complexity for both GVAE and GL-Disen is $O(V^2)$, with adjacency reconstruction being the dominant component (Eq. 10). The two additional steps GL-Disen has for the disentanglement, between the encoder and decoder, are as follows: (A) accumulating using Eq. 9, and (B) combining global and local samples to feed to the decoder. Both steps (A) and (B) are linear operations during both training and inference, with complexity $O(V)$ in both time and space. Compared to the high complexity of the GNN encoder and decoder common to both GVAE and ours, this linear increment for disentanglement is not significant. \\n\\n8. At the beginning of this response, we discuss the difference between our method and Beta-VAE, which enables discovery of global factors in the unsupervised setting. 
\\n\\n[a] InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation, ICLR 2020\\n[b] beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, ICLR 2017\\n[c] InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, NIPS 2016\\n[d] Deep Graph Infomax, ICLR 2019\\n[e] Learning Structural Node Embeddings via Diffusion Wavelets, KDD 2018\\n[f] Convolutional Kernel Networks for Graph-Structured Data, ICML 2020\"}",
"{\"title\": \"(Continued) Response to AnonReviewer5 - Part 1\", \"comment\": \"We thank the reviewer for the constructive feedback.\\n\\n**Note about novelty:**\\n\\nTo the best of our knowledge, global/local disentanglement for graph-level representation learning is novel, as the reviewer mentions.\\n\\nThe reviewer is correct that our work is based on Beta-VAE and graph VAE, which are known in the literature. But we would like to highlight one key difference that is critical for this whole work (our apologies that our writing did not highlight this clearly). As the reviewer knows, **the Beta-VAE baseline cannot discover global factors automatically:** Beta-VAE discovers independent latent factors, but there is no way for a baseline Beta-VAE to understand whether these factors are global / non-global in the unsupervised setting. Usually, some manual inspection is performed on the learned latent variables. E.g., for images, one needs to perform traversal of the individual latent variables one by one and observe their effects, such as a change in azimuth (Beta-VAE, Figure 2). Not only would this require manual effort, but it could also be very difficult for graphs, since many graphs represent very specialized knowledge, e.g., protein-protein interactions, and it is difficult to understand which factors are global/local. \\n \\nIn our work, we add on top of Beta-VAE **an accumulation step** for the GNN encoder outputs of vertices belonging to the same graph (see Fig 1 in our paper). This forces the emergence of the global factors - information common across all the vertices. This mechanism is critical for our idea of extracting a representation for the whole graph, and we are able to capture global factors without a priori knowledge of the generative factors.\\n \\nWe have updated the manuscript to make this clear (end of Section 3).\\n\\n---\\n\\n**Answers to the major concerns:**\\n1. 
We updated our main experimental results in Table 1 with the GVAE baseline (Kipf and Welling) for all the datasets and showed the consistent and considerable improvement we obtain via the disentanglement mechanism in GL-Disen. Note that, without disentanglement, the baseline GVAE performs worse than the recent work InfoGraph [a] in most cases, while with the addition of disentanglement our GL-Disen outperforms InfoGraph consistently. The GVAE baseline and GL-Disen use the same backbone networks and training parameters for a fair comparison. \\n\\n2. In order to discuss the intuitive meaning of what the global latent variables capture, as the reviewer suggested, we added an analysis with a synthetic dataset in Section 4.2.2. Specifically, we perform experiments on Erdos-Renyi (ER) graphs. The ER(n,p) graphs are synthetic graphs with two global generative factors: the number of nodes n and a parameter p \\\\in [0, 1], the probability with which the synthetic graph includes an edge (i, j) for 1 <= i < j <= n. In our experiment, we focus on the parameter p, as n is too easy to learn. Therefore, we create a training dataset of ER(n,p) with fixed n and varying p. We pass this training dataset to our GL-Disen method. We demonstrate that GL-Disen can discover the generative factor p using the training dataset only. In particular, we follow previous work such as Beta-VAE [b] and InfoGAN [c] to perform traversal of the latent variables. We show that our representation is disentangled: our global latent variable ($z_g$) captures the global generative factor p, and the local latent variable captures local randomness. We emphasize that our setup is unsupervised: the generative factor p is unknown to our method, and our method discovers this global factor.\\n\\n3. To evaluate the impact GL-Disen has at the node level, we added Section 4.3 on node classification tasks. 
We observe that for graph-level tasks, completely removing local information achieves the best accuracy (Appendix G), and this validates our hypothesis that local information distracts graph-level tasks. For node-level tasks, we found that combining some local and global information achieves the best performance. This is consistent with the observation in the recent work GraphWave [e], which states that identifying distant nodes with similar neighbourhood structures is a strong factor in node-level task performance. Our combined method increases the performance because $z_g$ is able to capture those long-distance similarities. Although methods like DGI [d] have claimed that their \\u201c derived patch representations are driven to preserve mutual information with the global graph summary, this allows for discovering and preserving similarities on the patch-level\\u201d over long distances, they have neither an empirical analysis nor the capability to determine the optimal amount of global information required. In contrast, GL-Disen disentangles local and global information apart. These disentangled representations allow us to explicitly control the amount of global/local information and combine them; please see Table 6 (Appendix I) for our experiments.\"}",
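The accumulation step (Eq. 9) and the global/local combination (Eq. 11) discussed in this thread can be sketched as follows. This is a hypothetical illustration: it assumes a mean for Eq. 9 and a convex combination for Eq. 11 (the paper's exact forms may differ), and all function names are made up.

```python
import numpy as np

def accumulate_global(node_mu, node_logvar):
    # Accumulation step (Eq. 9 is assumed here to be a mean): pool the
    # per-node posterior parameters so the global latent captures only
    # information common to all nodes of the same graph.
    return node_mu.mean(axis=0), node_logvar.mean(axis=0)

def combine(z_global, z_local, lam):
    # Eq. 11 is assumed here to be a convex combination: lam = 0 keeps only
    # the global latent (best for graph-level tasks), lam = 1 keeps only the
    # local latents (intermediate mixes help node-level tasks).
    return lam * z_local + (1.0 - lam) * z_global[None, :]

rng = np.random.default_rng(0)
num_nodes, dim = 5, 3
node_mu = rng.standard_normal((num_nodes, dim))
node_logvar = rng.standard_normal((num_nodes, dim))
g_mu, g_logvar = accumulate_global(node_mu, node_logvar)
z = combine(g_mu, node_mu, lam=0.0)  # pure global representation per node
```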
"{\"title\": \"An interesting idea, but it is unclear whether the improvement really comes from disentanglement\", \"review\": \"Summary:\\nThis paper proposes an unsupervised graph-level representation learning method considering global-local disentanglement. Specifically, the authors propose a GL-Disen model based on the graph VAE architecture to jointly learn global and local representations for a graph. The global information is shared across the whole graph, while the local information varies from patch to patch, corresponding to common and local factors, respectively. Empirical experimental results show that the learned representation achieves superior performance in the downstream graph classification task, and analyses demonstrate that the learned representations exhibit some disentanglement properties.\", \"pros\": \"1. Unsupervised graph representation learning considering global and local disentanglement seems to be a novel problem. \\n2. The proposed method generalizes disentangled VAE to graph data to disentangle common factors from the local ones. The formulations and model descriptions are clear. \\n3. Experiments, including both qualitative analysis and quantitative results, demonstrate the effectiveness of the learned global factors in downstream tasks.\", \"cons_and_questions\": \"My major concern lies in the insufficiency of the experiments. Specifically: \\n1. The disentanglement part is modified from Beta-VAE. Since the normal VAE has been adopted for graphs (e.g., Variational Graph Auto-Encoders by Kipf and Welling), the authors need to compare these methods to demonstrate that the improvement actually comes from the disentanglement part rather than the VAE structure. \\n2. 
Although the authors demonstrate the effectiveness of disentanglement in downstream tasks (i.e., graph classification), it is unclear whether these global factors have intuitive explanations on some of the datasets, e.g., the showcases of molecular graphs in Duvenaud et al., 2015; alternatively, the authors may adopt some synthetic datasets. \\n3. Since both the global and local node representations are disentangled, I am curious whether the local node representations can also be validated in some downstream node-level tasks. \\n4. Figure 2 in Section 4.2.1 is not entirely convincing since there is no reference line of how much correlation a non-disentangled method will have (e.g., in Ma et al., 2019, the authors compare the disentangled method with GCN).\", \"other_questions\": \"5. How can the proposed method handle the mode collapse problem, i.e., when only a few latent factors learn useful information? \\n6. As shown in Table 1, though the proposed method outperforms other GNNs, it does not always compare favorably to kernel-based methods such as GCKN. The authors may want to further elaborate on the pros and cons of using GNNs vs. kernel-based methods. \\n7. There is no discussion of the complexity of the proposed method. \\n8. The technical contribution is somewhat limited since both Beta-VAE and graph VAE are known in the literature. It would be more interesting if the authors could integrate local-global disentanglement with the local neighborhood disentanglement in Ma et al. 2019 to derive a more novel architecture. \\n\\nI will be happy to improve my scores if the authors can address the above questions. 
\\n \\n \\n========= \\n \\nI have updated my score considering the paper has improved its quality after the revision (adding more experiments/baselines, comparison with the literature, etc.).\\n\\n\\n=========\", \"new_updates\": \"following the new comments of Reviewer 4, I also briefly check the code in the supplementary material and find it indeed seems to have the consistency problem (i.e., not reconstructing graph edges as mentioned in the paper). Thus, I am also wondering how the authors implement Graph-VAE in the rebuttal phase and whether the improvement of their proposed method over Graph-VAE is really from disentanglement or the differences in the autoencoder. Based on this potentially serious problem, I reinstate my original score and think the paper should be clarified before acceptance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Lack of guarantee that the global and local factors are disentangled, unclear definition, limited novelty\", \"review\": \"The authors propose a VAE-type generative model approach to characterize the hidden factors, with a divided focus on the global and local reconstructions. The claim is that the learnt hidden representations are disentangled (which is not defined clearly) using two reconstruction terms. The setting of the problem adopts the graph VAE setting in [1,2] (which I think the authors should mention in the related work), and the ELBO & local aggregation (convolution) approaches used in this paper are relatively standard in the generative modelling and graph representation learning domain.\\n\\nApart from the limited novelty, which would not have affected my evaluation if the paper solved the problem as claimed, I have several major concerns about this paper:\\n\\n1. The notion of disentanglement is not well-defined in the first place. In the VAE setting where the hidden factors are stochastic, does disentanglement refer to independence? Or are they orthogonal under a specific measure induced by the graph itself? The claims made by the authors can never be examined rigorously (the visual results do not constitute supportive evidence, as I shall discuss later). \\n\\n2. There is no guarantee that the so-called global and local factors are not confounded. Both the global and local reconstruction terms involve the two types of factors. Given the high expressivity of deep learning models, the local factors can easily manage both tasks, or the global factors are merely enhancing the signals of the local factors. There is no mechanism to prevent cross-terms during the optimization, so the learning processes of the global and local factors are confounded as a result of how the authors designed the objective function.\\n\\n3. Unclear interpretation of the visual results. 
It seems that the visual results showcase a similar pattern among the local and global factors, despite the difference that the signal is stronger for the local factors (which is evident as they play a more critical role in the objective). In the absence of a clear definition of disentanglement, more persuasive numerical results and interpretations are needed.\\n \\n\\n[1] Kipf T N, Welling M. Variational graph auto-encoders[J]. arXiv preprint arXiv:1611.07308, 2016.\\n[2] Xu, Da, et al. \\\"Generative graph convolutional network for growing graphs.\\\" ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper tries to study unsupervised disentanglement learning for graph-level representations. In particular, it focuses on the complex process with global and local generative factors and proposes a VAE based learning algorithm, which it argues achieves state-of-the-art performance on the task of unsupervised graph representation learning.\", \"review\": \"I think the idea of the paper is interesting. The writing is good and easy to read. However, it does not meet the conditions for acceptance from my point of view. I have some concerns with its characterization of the literature.\\n\\n- Some important related work is missing. It seems the authors ignore some literature on unsupervised graph representation learning, such as [1], [2], etc. Also, they do not make a performance comparison with the methods above in the experiments.\\n[1] Contrastive Multi-View Representation Learning on Graphs. ICML 2020\\n[2] Self-supervised Training of Graph Convolutional Networks. Arxiv 2020\\n\\n- Disentangling the global and local generative factors in graph representation learning is important. However, the authors didn't explain the definition of \\u201cGlobal\\u201d and \\u201cLocal\\u201d factors clearly. It would also be better if they could show an example of global/local factors when generating a graph.\\n\\n- Some experiments are missing. I have some concerns as follows. What is the best number of generative factors, which is important for this method? Can mode collapse occur with this method, and how can it be validated or prevented? How can this method prove that each factor is necessary for the generative process? What is the real meaning of each factor? What is the time/space complexity of this method? 
More experiments or discussions should be conducted to answer these questions.\\n\\nBased on the above reasons, this paper needs considerable further improvement.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper proposed GL-Disen, which is a disentanglement-based unsupervised method for graph representation learning.\", \"review\": \"In this paper, the authors proposed to disentangle the global level information from the local level one to reduce the effect of the irrelevant information. The proposed method outperforms several state-of-the-art baselines on multiple datasets for graph classification. Overall, I like the idea of applying unsupervised disentangled learning to graph level representation learning. My concerns are with the experimental study and missing references.\", \"strong_points\": \"1. Disentanglement learning is a cutting-edge field and has gained much attention in recent years. It is true that global and local features often entangle together when we learn graph representations. The problem is real and important.\\n\\n2. The architecture of the model is easy to understand and reasonable.\\n\\n3. The experimental study is comprehensive, including both qualitative analysis and quantitative analysis. The experimental setup instructions and pseudo-codes are very clear, making the algorithm easy to reproduce.\", \"weak_points\": \"1. Performing experiments only on graph classification tasks weakens the significance of the paper. It is common for graph representation learning methods to be tested on other tasks, such as graph similarity/distance computation and graph-level clustering, in order to draw a general and convincing conclusion.\\n\\n2. Some important references are missing. The authors should discuss and compare with them.\", \"on_graph_level_representation_learning\": [\"Bai et al. Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity. IJCAI 2019.\"], \"on_disentangled_representation_learning\": [\"Yang et al. Factorizable Graph Convolutional Networks. NIPS 2020.\", \"Guo et al. Interpretable Deep Graph Generation with Node-edge Co-disentanglement. KDD 2020.\", \"3. 
The paper mentioned that the global and local latent generative factors are sampled from their respective posterior distributions. More details are expected.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The authors proposed a disentanglement learning based approach for unsupervised graph level representation learning, which aims to capture the global and local latent factors.\", \"review\": \"In this paper, the authors proposed a disentanglement learning based approach for unsupervised graph level representation learning. They assume that disentangled representations which capture these global and local generative factors into independent latent units can be highly beneficial for graph level tasks. The extensive experiments and analysis show that their method achieves state-of-the-art performance on the task of unsupervised graph representation learning.\\n\\n===========\", \"strengths\": \"1. The paper is well written, and disentangling factors can benefit unsupervised graph representation learning.\\n2. The performance of this work is good compared with the state-of-the-art baselines. The source code is also available.\\n3. The related work is sufficient to understand the motivation of this work.\\n\\n=====\", \"weakness\": \"1. The idea is not very novel. For example, it relies on two important assumptions: 1) a global and a local factor for graph analysis, and 2) local latent factors are independent.\\nThose two assumptions actually have been explored in unsupervised learning tasks. For example, the following paper [1] exactly disentangles local and global information into two separate sets of latent variables within the VAE framework. It seems that migrating this idea to graphs is straightforward. The paper is more like a mixture of [1], (Higgins et al., 2017), and GCN\\n\\n[1] Charakorn, Rujikorn, et al. \\\"An Explicit Local and Global Representation Disentanglement Framework with Applications in Deep Clustering and Unsupervised Object Detection.\\\" arXiv preprint arXiv:2001.08957 (2020).\\n\\n2. In Figure 4, it seems that GL-Disen global has very good accuracy. 
The GL-Disen global-local combination only outperforms GL-Disen global within a very small range of \\\\lambda, and with large fluctuation. Does that mean the local factor contributes little to the overall performance?\\n\\n\\nIn conclusion, the authors propose a VAE based learning algorithm to disentangle the global graph-level information. The overall presentation is good. Similar ideas have been explored in unsupervised learning. The novelty of this work is thus not very impressive.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
K5j7D81ABvt | Disambiguating Symbolic Expressions in Informal Documents | [
"Dennis Müller",
"Cezary Kaliszyk"
] | We propose the task of \emph{disambiguating} symbolic expressions in informal STEM documents in the form of \LaTeX files -- that is, determining their precise semantics and abstract syntax tree -- as a neural machine translation task. We discuss the distinct challenges involved and present a dataset with roughly 33,000 entries. We evaluated several baseline models on this dataset, which failed to yield even syntactically valid \LaTeX before overfitting. Consequently, we describe a methodology using a \emph{transformer} language model pre-trained on sources obtained from \url{arxiv.org}, which yields promising results despite the small size of the dataset. We evaluate our model using a plurality of dedicated techniques, taking syntax and semantics of symbolic expressions into account. | [
"symbolic expressions",
"dataset",
"informal documents",
"task",
"informal stem documents",
"form",
"files",
"precise semantics",
"abstract syntax tree"
] | Accept (Poster) | https://openreview.net/pdf?id=K5j7D81ABvt | https://openreview.net/forum?id=K5j7D81ABvt | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Hgpzs-wiXbA",
"kNauGZv-oy",
"592UDSGt9RI",
"qruAZHyuM1",
"DxEgBK1je1G",
"iUH6-XPB-ba",
"p1YmeznyLg3",
"RaS8YFh7eqh",
"KNDWBvF4u8"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040448639,
1605551648675,
1605551408918,
1605551117301,
1605550918663,
1604312130230,
1604290161313,
1604018944949,
1603442341718
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3399/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3399/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3399/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3399/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3399/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3399/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3399/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3399/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper tackles the task of translating informal LaTeX\\nmath into a formal representation annotated with abstract concepts (sTeX /\\nSMGloM). The authors build a synthetic training data generation mechanism,\\nand construct an evaluation dataset by hand. The problem is tackled as machine\\ntranslation, and vanilla systems fail, while GPT-2 pretrained on LaTeX documents\\nperforms well. The reviewers recognize the importance of this work, in an area\\nwhere data is not plentiful and benchmarking is difficult. The authors do a\\ngood job in presenting a difficult topic rather clearly, but I would encourage\\nthe authors to continue improving the presentation, possibly with clearer examples\\nor figures. The particular \\\"copying bias\\\" useful in this task, pointed out by a\\nreviewer, is indeed interesting and I encourage the authors to consider that\\ndiscussion and the thoughtful reviews deeply. Overall, this is a significant\\ncontribution to the field and I recommend acceptance.\"}",
"{\"title\": \"Our reply\", \"comment\": \"You seem to be mostly interested in a better presentation and examples for the precise input/output of the model. One such example can be found in Section 6 (on page 7) - could you be more specific in what you want to see? Given that in the final version we have an extra page, we can happily add more informative examples.\\n\\n> For example, there is lots of semantic parsing research on transferring text into SQL queries (the opposite direction of the current problem) or on solving textual mathematical problems. Can any ideas be borrowed from this literature or is it only vanilla MT that can be applied ?\\n\\nAs far as we know, the projects mentioned use purely \\\"classical\\\" algorithmic approaches for (semantic) parsing, because in those settings, these are sufficient. For example, the projects on solving mathematical problems (as far as we know) all focus on specific mathematical domains (e.g. real arithmetics), where the meaning of symbolic expressions is (assumed to be) unambiguous anyway or can be easily inferred from purely syntactic usage (e.g. \\\"+\\\" is always assumed to be addition on real numbers), so that the expressions can simply be parsed deterministically. A paragraph to that effect can be found at the bottom of the state of the art section (page 4).\"}",
"{\"title\": \"Our reply\", \"comment\": \"> In the start of section 4, a number of systems were listed. Each of these was an attempt to automate the formalization process, but there was no attempt to compare against these methods.\\n\\nA direct comparison of our approaches with autoformalization is not possible, as autoformalization is a translation to a fixed logic, whereas our translation still allows the logic used in the target language to be arbitrary. As such the tasks are different. In fact autoformalization is useful for translation to the languages of proof assistant systems, whereas what we do here is useful for computer algebra systems and similar mathematical knowledge management and interchange tools. In particular (as mentioned in the state of the art section, page 4), we deliberately do not want to change the presentation of informal natural language fragments, which those other projects translate to logical statements in a formal language.\\nConsequently, a comparison between ours and autoformalization projects would require both a shared evaluation dataset (see our reply to AnonReviewer1 regarding the difficulties there), and aligning the purely symbolic expressions in the informal inputs with their corresponding counterparts in the translated fully formal outputs of the autoformalization models - which requires an extreme amount of manual work and expertise in both the mathematics involved as well as the formal system we compare our approach to. Consequently such a comparison is currently not feasible.\\n\\n> As noted towards the end of section 4, the target transformation for much of the document is the identity. 
Given that actually a lot of this text should not change, is phrasing it as translation the best choice?\\n\\nIt seems to us that machine translation is the established method that comes closest to what we are trying to accomplish, and all of our experiments were guided by us considering our task as a machine translation task, if only for a lack of alternative approaches. We agree that from the point of view of NMT, our task is somewhat peculiar, which is why we explicitly discuss those peculiarities in detail in the paper.\\n\\n> a lot of training is necessary to prime the model to learn this identity transformation, and that data is not given to those models.\\n\\nI am not entirely sure what is meant by \\\"that data\\\". It would have been possible (and we considered this) to pretrain NMT models to learn the identity on plain LaTeX fragments first, but we assumed that to \\\"unlearn\\\" the identity afterwards on symbolic expressions would be no less difficult for such an initialized model than to train a randomly initialized model in the first place - especially since learning the identity does not in fact teach the model anything with respect to the semantics of the sentences, which is what it needs to learn for correctly disambiguating from document context.\\n\\n> if automated methods produce the dataset, which is then used to train the model, then why are those methods not sufficient for the end task?\", \"we_produce_two_parts_of_the_dataset_via_automated_means\": \"We generate plain LaTeX from existing sTeX documents, and we synthesize training data by generating random sTeX and then translating that to plain LaTeX. In both cases, the easy step is to translate sTeX to plain LaTeX, which can be easily done by deterministic algorithms (it amounts to just expanding sTeX macros). 
The hard part that our paper addresses is the reverse: Generating (correct) sTeX from plain LaTeX, which requires some amount of document comprehension.\\n\\n> It was also bizarre in the results section how the baselines were dismissed in writing, their results were never presented.\\n\\nThe results were entirely nonsensical due to the lack of training data. As mentioned, our experiments with established NMT models did not even produce syntactically valid LaTeX, so presenting these would not have been informative (examples include e.g. sequences of closing braces, or ungrammatical concatenated substrings of the training data with no relation to the current input).\\n\\n> If the baselines are truly that bad, then do they suffice as baselines?\\n\\nWe agree that they are not meaningful baselines, but they are the only thing we could compare our model to at all. We would happily compare our model to autoformalization projects instead, once we have a compatible evaluation dataset, but as mentioned there is a large amount of work required in order to do so, for which we want to significantly simplify the data generation workflow first.\\n\\n> These are all things that are outside the scope of the typical ICLR paper and thus warrant a clear introduction, but space is limited.\\n\\nIndeed, the amount of introduction required is a problem given the page limit. Since in the final version we would have an additional page of space, suggestions on which parts of the paper should be expanded most would be very welcome.\"}",
"{\"title\": \"Our reply\", \"comment\": \"> We do need a larger and high-quality evaluation set to validate any actual progress on this problem.\\n\\nWe fully agree. A larger evaluation set using the standards applied in the paper is (currently) a significant amount of work though, and resources are unfortunately very limited - especially now that the funding period for this project is over. Our criteria for the evaluation set were: \\n1. Unlike the plain LaTeX side of the training set, it should be entirely written by hand to avoid bias, \\n2. (Most, but ideally) all symbols occurring in the evaluation set should be aligned with a strongly typed library in order to allow for synthesizing training data (which we otherwise lack for most mathematical domains), and \\n3. The evaluation set should contain multiple symbols with the same presentation, so that non-trivial disambiguation becomes relevant - in our case primarily arithmetic operations on different domain sets (naturals, integers, reals, etc.).\\n\\nExtending the evaluation set, in particular to cover more mathematical topics, would be extremely desirable, but requires a lot of work and a non-trivial amount of expertise both regarding sTeX and the SMGloM as well as MMT and its formal libraries.\\nWe have ideas and are actively working on reducing the amount of effort involved in this though, so we hope that this will become more feasible in the not too distant future.\\n\\n> From what I understand, our best evaluation protocol should be checking if S_F belongs to STEX(S_STEX), which is not used in this work. Is there a way to implement this protocol?\\n\\nThe problem here is that STEX(S_STEX) would be the set of all sTeX fragments that are correct full disambiguations of S_STEX (which is by definition already correctly fully disambiguated) - in other words, it is the set of all semantically equivalent sTeX-disambiguations. 
This set is necessarily not computable, so it's unclear how we could check that directly. However, the protocols we *did* implement are intended to be reasonable approximations of this, primarily \\\"provided_stex\\\" (i.e. string equality of S_F and S_STEX), \\\"stexcheck\\\" (i.e. S_F is fully disambiguated) and \\\"stex_as_omdoc\\\" (i.e. equality of syntax trees after translation to a strongly typed setting), the first and third of which do actually imply S_F\\\\in STEX(S_STEX).\"}",
"{\"title\": \"Our reply\", \"comment\": \"> Question: S_F = S_sTEX means exact string equality or after white space normalization, etc? If so can you say what are exactly the normalizations and what is the success rate before and after them?\\n\\nIt is equality *after* normalization. Since normalization was applied to the training data, and is applied to a document *before* being input into the algorithm, the output can be expected to be (close to) normalized anyway. Therefore we don't consider it informative to evaluate the algorithm without normalization.\\n\\nThe precise normalization can be found in latex/src/main/scala/com/fifom/latex/Normalize.scala (definition of \\\"val cleanups\\\"). \\\"Remove\\\" removes macros entirely (e.g. \\\"Remove(semicolon)\\\" removes \\\\; macros), \\\"Modifier\\\" replaces macros with other tokens (e.g. Modifier(mid,...) replaces \\\"\\\\mid\\\" by a simple \\\"|\\\" character token). Whitespaces get reduced to a single space token during parsing automatically (as TeX would do it).\\n\\n> Would larger GPT models help?\\n\\nThe current experiment was performed on a relatively small set of sTeX symbols, as occurring in the (relatively short) evaluation dataset. For that, the size of the model probably does not make much of a difference. Once we scale up the (mathematical) domain of the model, it seems plausible that the size of the model would have a more pronounced impact.\\n\\n> Would unsupervised learning like in Wang 20 be useful in some context here?\\n\\nMy impression is that unsupervised methods (by and large) require more training data than supervised ones. So likely yes, assuming we can get more sTeX data (with a broader range of sTeX symbols) in the future. Relying on synthesized data alone for unsupervised methods seems unwise to me.\"}",
"{\"title\": \"Important step in autoformalization bringing in good tools\", \"review\": \"The paper presents a dataset for autoformalization (semantic\\ndisambiguation) of informal Latex STEM documents. It is based on the\\nconsiderable amount of work that has been done in the last decade on\\nflexiformal (semi-formal) language formats and tools such as OMDoc,\\nOpenMath, sTeX and LaTeXML. The SMGloM glossary and the MiKoMH\\nrepository are used as parallel sources, and the MMT system connecting\\na number of formal systems and foundations is used for data\\naugmentation.\\n\\nThese are still relatively small datasets, so custom pretraining of\\nGPT-2 is done on the full arxiv corpus. The pretrained model is then\\nfine-tuned on the smaller training data. Multiple evaluation metrics\\nthat are meaningful in the semantic setting are defined - some of them\\nsimilar to those used in Wang et al 18 and Wang et al 20.\\n\\nThe final success rate of 47.2% of test data predicted correctly looks\\nvery good and is comparable with the results of Wang18/Wang20 on the\\nsynthetic data obtained by informalizing Mizar.\\n\\nMy overall impression is that this is an important step in the\\nautoformalization program [1]. It has involved a lot of work and brought\\nin a range of important tools developed recently.\", \"some_detailed_remarks\": \"\", \"p5\": \"def 4.1 \\\"We call S \\u2208 L fully disambiguated\\\"\\n==>\\nI would not call the text fully disambiguated without types of\\nvariables. In systems with subtypes (e.g., Mizar, possibly also other\\nPAs with typeclasses) the meaning and provability of a statement\\n(e.g., \\\"forall x exists y st x = y *_complex y\\\") will change depending\\non whether the quantification is over complex, real, rational, integer\\nor natural numbers.\", \"p7\": [\"Question: S_F = S_sTEX means exact string equality or after white space normalization, etc? 
If so can you say what are exactly the normalizations and what is the success rate before and after them?\", \"Would larger GPT models help?\", \"Would unsupervised learning like in Wang 20 be useful in some context here? The unsupervised methods seem to have improved a lot recently.\"], \"references\": \"[1] Cezary Kaliszyk, Josef Urban, Jir\\u00ed Vyskocil, Herman Geuvers:\", \"developing_corpus_based_translation_methods_between_informal_and_formal_mathematics\": \"Project Description. CICM 2014: 435-439\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting work for autoformalization\", \"review\": \"This paper proposes a new task, disambiguating an informal math expression in LATEX by associating its tokens with concepts in a predefined formal math library and determining its abstract syntax tree. As argued in the paper, I agree that this task could serve as an important step for autoformalization, which is one of the most important problems of formal reasoning.\\n\\nThe task setup is reasonable. LATEX is commonly acceptable to be the informal language for editing math expressions. STEX and SMGLoM are powerful tools to annotate LATEX expressions with formal concepts. By advancing on this problem, we can greatly reduce the workload of autoformalization. \\n\\nThe drawback of the current benchmark is the lack of training and evaluation data. I think the lack of training corpora may be addressed by pretraining and building synthetic data. We do need a larger and high-quality evaluation set to validate any actual progress on this problem. The current evaluation set is too small and covers limited math topics. Also, the evaluation protocol is quite unclear. From what I understand, our best evaluation protocol should be checking if S_F belongs to STEX(S_STEX), which is not used in this work. Is there a way to implement this protocol? \\n\\nThe proposed approach looks fine. It is better to have an ablation study on the corresponding contributions of pretraining and synthetic data.\\n\\nIn general, I think this paper proposes an important task. By building a larger evaluation set and figuring out a clear evaluation protocol, this could be an important benchmark for the AI/TP community. \\n\\n=======================================================================\\nAfter reading other reviews and authors' responses, I upgrade my score to 6. 
Despite its relatively small evaluation data, I think the setup of the task of autoformalization could still contribute to the community and inspire more researchers to make efforts in this direction.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Important line of work hindered by little methodological novelty and poor evaluation\", \"review\": \"#### Summary:\\n\\nIn more mathematical fields, theorem provers and similar systems can validate claims made about formal systems. However, many research contributions come in the form of papers, and thus they are never validated in this way. Math researchers can express their contributions in a special purpose language to do this, but that places an additional burden on them to learn this skill.\\n\\nAn alternative would be to \\\"translate\\\" the research into formal languages which could be operated on by automated systems. This seems to be the goal of this paper, which looks at using translation to disambiguate some expressions from STEM documents written in LaTeX, mapping them into an sTeX document. Previous research in this area used more hand-specified transformations, and was evaluated on different data. This makes this work closely related to existing work, but not directly comparable.\\n\\nThe main finding of this work is that pre-trained transformer models outperform more traditional fully-supervised translation systems on this task. It is difficult to gauge precisely to what extent the proposed method solves the task, or to fully grasp what aspect of the translation problem is being solved.\\n\\n\\n\\n#### Strong points: \\n\\nThe proposed approach of using transformers and large in-domain pre-training is similar to a lot of recent work which has been shown to work well in practice, and is therefore well-motivated. The task itself is important and improvements in this area could have a broad range of impact that would even be good for the ML field itself, so even a practical improvement using fairly standard ML would be a useful contribution.\\n\\nThe authors are clearly knowledgeable on the topic, and discuss and cite a great deal of the literature and libraries relevant to the problem. 
It's a heavy paper -- there's a very extensive set of work that's been studied and referenced.\\n\\n\\n#### Weak Points\\n\\nAt the start of section 4, a number of systems were listed. Each of these was an attempt to automate the formalization process, but there was no attempt to compare against these methods.\\n\\nAs noted towards the end of section 4, the target transformation for much of the document is the identity. Given that actually a lot of this text should not change, is phrasing it as translation the best choice? It seems that phrasing it this way sets up the NMT baselines to perform poorly, since a lot of training is necessary to prime the model to learn this identity transformation, and that data is not given to those models.\\n\\nThe evaluation methodology is confusing. For instance, it seems some of the data is generated via an automated procedure, both in the supervised learning (Section 5) and in the Synthesizing Training Data section. It makes it difficult as a reader to understand why this is not a chicken-and-egg type of scenario: if automated methods produce the dataset, which is then used to train the model, then why are those methods not sufficient for the end task? This may be a problem that arises from introducing so many domain-specific libraries and formalisms, that it leaves the reader with a great deal of difficulty understanding precisely what the transformation is accomplishing and from what type of data.\\n\\nIt was also bizarre in the results section how the baselines were dismissed in writing; their results were never presented. If the baselines are truly that bad, then do they suffice as baselines? 
The authors choose these instead of the existing formalization methods, so why make the contrast with methods that are not in a position to perform the task well?\\n\\n\\n#### Recommendation\\n\\nBecause the presentation makes it difficult to fully grasp the problem setting, precisely what is being learned, precisely what is failing, it is difficult to recommend the paper for acceptance. It is actually very understandable that this particular paper has this problem, because the authors are forced to introduce many unfamiliar concepts -- the problem setting, the types of formalisms used, the libraries used in creating the data, etc. These are all things that are outside the scope of the typical ICLR paper and thus warrant a clear introduction, but space is limited. I could easily imagine this paper filling up 12-14 pages just with the same content presented here. But ultimately the paper is not written in a way that can properly convey the scope of the work and narrow in on precisely the targeted problem and why it's difficult and important.\\n\\nThen the experimental section is quite short and lacks important comparisons. Given the lack of suitable baselines, I would not be able to recommend accepting the paper without real comparisons to other work in this area. Again this could be a space concern, but the paper overall spends too much time leading up to methodology/experiments, and then is very light on actual experimental content. Factoring in that the model is used in a very off-the-shelf way and doesn't treat the problem setting any differently from a standard translation task, it is hard to see real novelty in the modeling contribution either.\\n\\nOverall I think the work is promising, but it is far too rough in its current state to be considered for acceptance without significant revision. 
It would need major restructuring and refocusing, more experiments, and more analysis.\\n\\n\\n#### Presentation\\n\\nI feel like there are a lot of domain-specific meanings to terminology that make it more difficult than necessary to understand by a general ML audience. Take, for instance, formal and informal. To most language users, a scientific paper is a formal document -- it uses formal language. So it takes me some time as a reader to get into the actual data section and understand truly what is meant by informal here. There are many things of this nature that would be better to clarify up-front, so the reader with the typical ML background and biases doesn't carry around incorrect concepts of what the paper is about, for longer than is necessary.\\n\\n\\nThe citation format is incorrect.\\n\\nSmall typos throughout.\\n\\n\\n#### In considering author response:\\n\\nThank you to the authors for continuing discussion on the points raised in my review, and for further clarifying the nature of the data as a kind of unidirectional ambiguity problem. I understand this better now and can see a contribution in releasing this data / data-generating process for other researchers studying autoformalization. On account of this I'm going to raise my initial scoring.\\n\\nOn the subject of methodology, I still think there are reasons to reconsider this work. As discussed, the translation baselines were not great. I think it's not really fair to compare those models without pre-training on data that was too small to learn basic tree properties. It is possible that translation models that perform string-to-tree translation would perform better here (1), though results from natural language translation would hint towards the pre-trained models still performing better. Translation models used in the domain of programs seem more suitable as well, and there's a good number of these, and there is a natural desire to generate strings that reflect a properly nested tree (2). 
There is also work on mapping strings to knowledgebase queries that seems similar in input/output (IIRC, Luke Zettlemoyer had a number of important papers in this line).\\n\\nBut at best these would still be comparisons of mostly off-the-shelf translation models, which doesn't leave the reader with much of a takeaway.\\n\\nSo I'm left feeling that if the authors want a useful quantitative comparison, these methods should be explored. \\\"Pre-trained model beats model trained only on in-domain data\\\" is not to me a story significant enough to warrant inclusion in the conference, even if it contributes a new dataset (as the modeling is presented as a contribution here). Even off-the-shelf methods can of course be part of an important contribution when the authors show that they have pushed the field further with an important result (say, GPT) but I do not feel the evaluation in this case supports that conclusion.\\n\\nIt seems more natural, given that none of these methods are likely to out-perform a vanilla pre-trained translation model, that the problem description and qualitative evaluation are of the utmost importance. I would really recommend expanding this beyond half a page, to give the reader a better idea of what problems are solved and which remain. It also seems that some of the errors pointed out (like those involving ellipses) would likely be remedied by additional synthetic data. As I'm the most dissenting reviewer, I would still hope the authors attempt to improve the results section with the additional page upon acceptance.\\n\\n1.\\nTowards String-To-Tree Neural Machine Translation\\nRoee Aharoni, Yoav Goldberg\\n\\n2.\\nTree-to-tree Neural Networks for Program Translation\\nXinyun Chen, Chang Liu, Dawn Song\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A thorough paper about style transfer: From Latex math expressions to less formal descriptions. A thorough work, but the problem definition can be clearer and connection to previous work can be better made.\", \"review\": \"This paper addresses a variant of the style transfer problem - that is, transferring formal latex expressions to less formal descriptions that can be followed by a mathematician.\\n\\nThis is a paper that formulates a new task, provides a dataset for the task and tests initial approaches for solving it. It is important to take this paper type into account when reviewing it - I am not expecting a very creative solution, or very strong results at this stage. What is important to me when reviewing such a paper is to see an accurate and interesting problem definition, an appropriate dataset, modeling and experiments that demonstrate the challenge of the problem and of the evaluation (if evaluation is indeed challenging) and proper awareness of previous work.\\n\\nIt will be most straightforward for me to review the paper by listing its pros and cons.\", \"strong_points\": \"1. The problem exposition is thorough and clear, and the introduction surely provides good motivation for the problem.\\n\\n2. It is clear that the authors are experts on the subject matter. That is, they are deeply familiar with the problem and with directly related previous work (that is, previous work that addressed this very problem or close variants).\\n\\n3. The authors propose a new dataset that is likely to be useful for the community of researchers that work on this problem.\\n\\n4. The paper proposes an algorithmic approach for the problem, tests it in experiments with the new dataset and the authors are aware of potential challenges in the evaluation and try to address them.\", \"weak_points\": \"1. The problem definition was not clear to me. 
I surely understand the general idea but I am missing a concrete example that demonstrates what exactly an algorithm for the problem gets as input and what is its output.\\n\\n2. I had a similar problem with the description of the dataset. Yes, there is a formal description (just as there is a formal description of the task), but the lack of examples leaves the description at a very abstract level - I could not understand what exactly should be expected in the dataset.\\n\\nI should note that 1+2 makes it harder to evaluate the results and to evaluate the appropriateness of the evaluation.\\n\\n3. The authors do not show awareness of work in semantic parsing and in style transfer. These works are very important both for the algorithmic approach and for understanding the challenges of evaluation (e.g. plurality). For example, there is lots of semantic parsing research on transferring text into SQL queries (the opposite direction of the current problem) or on solving textual mathematical problems. Can any ideas be borrowed from this literature or is it only vanilla MT that can be applied? As said above, I am aware this is a paper that introduces a task, but part of this introduction should be, I believe, the connection to relevant ideas and approaches. \\n\\nOverall, despite the challenges, I think the paper can contribute an interesting and practical new task to the research community - both at the task definition level and in providing an actual dataset. I recommend that the authors try to solve the above issues in the final version, but I am leaning towards acceptance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
uDN8pRAdsoC | Hard Masking for Explaining Graph Neural Networks | [
"Thorben Funke",
"Megha Khosla",
"Avishek Anand"
] | Graph Neural Networks (GNNs) are a flexible and powerful family of models that build nodes' representations on irregular graph-structured data. This paper focuses on explaining or interpreting the rationale underlying a given prediction of already trained graph neural networks for the node classification task. Existing approaches for interpreting GNNs try to find subsets of important features and nodes by learning a continuous mask. Our objective is to find discrete masks that are arguably more interpretable while minimizing the expected deviation from the underlying model's prediction. We empirically show that our explanations are both more predictive and sparse. Additionally, we find that multiple diverse explanations are possible, which sufficiently explain a prediction. Finally, we analyze the explanations to find the effect of network homophily on the decision-making process of GNNs. | [
"Interpretability",
"Graph Neural Networks",
"Hard Masks"
] | Reject | https://openreview.net/pdf?id=uDN8pRAdsoC | https://openreview.net/forum?id=uDN8pRAdsoC | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"hoCsUivYwul",
"90aVzcJWpHO",
"XiVtSac-2Vj",
"-_aPcFdaok3",
"Vne1nN6insa",
"5X9fwVTmHTP",
"5AHxOpk3aJ",
"nj1248dQWWG",
"foRQOofAM41",
"KaUa_fxo27w"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040446888,
1606076086570,
1605859598573,
1605780061303,
1605779190484,
1605779048692,
1605778957332,
1603906136239,
1603746448066,
1603721964938
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3398/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3398/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3398/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3398/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3398/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3398/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3398/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3398/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3398/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper provides a simple approach to explaining GNN predictions for each node by greedily selecting nodes or features in each computation graph so as to increase the fidelity score. The fidelity score is based on comparing the original GNN output to what is obtained with noisy versions of the masked nodes/features. While simple, the approach seems somewhat inefficient (efficiency should be assessed/characterized). Also, several improvements to the evaluation expressed in the reviews/discussion (e.g., human evaluation, practical utility, comparison to gradient based methods) would make the submission somewhat stronger.\"}",
"{\"title\": \"re: re: response to reviewer 3\", \"comment\": \"Thanks for your reply.\\n\\n**Computational Graph:** We hope the following changes will clarify your remaining concerns about our formulation of the computational graph: \\n\\nWe note that for a particular node $n$, the subgraph taking part in the computation of the neighborhood aggregation operation fully determines the information used by the GNN to predict its class. In particular, for an $L$-layer GCN, this subgraph would be the graph induced on nodes in the $L$-hop neighborhood of $n$. We will call this subgraph the *computational graph* of the query node. We would like to point out that the term computational graph should not be confused with the computational graph of the neural network.\\n\\n**Efficiency:** We now include in Appendix D (Figure 12) the runtime of our approach. After the initialization, we only need a few seconds to retrieve additional elements of the explanation. \\nDuring the creation of the paper, only very few methods (GNNExplainer) had been published which allow explaining GNNs. Please note that a comparison with gradient-based methods like Integrated Gradients, GradCam, LRP, DeepLift, etc. is not trivially possible since their extension to variable-size computational graphs is not well understood for node classification tasks (to the best of our knowledge until the submission of this manuscript). We do, however, compare with simple gradients, which were also used in GNNExplainer.\\n\\n**Faithfulness:** We first agree that there is no one perfect way to evaluate the quality of explanations. We agree with the reviewer that one cannot be 100% certain if the selected features/nodes are INDEED the explanation. In that sense, there is no foolproof way to check the REAL fidelity of an explanation. However, we can approximate the actual fidelity by the expected fidelity (that we propose in this paper). 
This notion of expected fidelity is based on an information-theoretic interpretation, which means -- \\\"if the explanation is highly predictive in expectation, then it is a high-quality explanation\\\". Since we fully agree with the reviewer, we will reflect this in our paper where our claims are qualified with caveats. If it makes more sense, we can call our measure \\\"expected fidelity\\\". To avoid any confusion, we have rewritten the respective paragraphs that used the term faithful and removed it.\"}"
"{\"title\": \"re: response to reviewer 3\", \"comment\": \"Dear authors,\\n\\nThank you for your response. I have looked at the updated submission and appreciate the improvements I see there. For instance, adding the background on GNNs is beneficial. Here, however, I would also like to press a bit more the issue of making clear what you mean by \\\"computation graph.\\\" You write in section 3.2 \\n\\n> We note that the computational graph of a node n operation as specified by neighborhood aggregation operation, see Eq. (1), fully determines the information used by GNN to predict its class. In particular, for a L-layer GNN, the L-hop neighborhood of n will constitute its computational graph.\\n\\nBut a computational graph has as nodes usually operations (or a collection of operations, summarized as a \\\"layer\\\"). In your GNN definitions, you use (\\\\ell) for the depth. Intuitively, \\\\ell here correspond to nodes (layers) in the computation graph, no? So if that's not what you mean (and that's what I understand), you might not want to call the L-hop neighborhood the computation graph. It's (as far as I understand it) the subgraph of the input graph that is induced by all nodes that partake in the computation graph for a particular node. This is subtle but it might be confusing for some readers if you equate the L-hop neighborhood of a node and the computation graph that a GNN induces for a particular node. I hope I am making sense.\\n\\nRegarding the efficiency. Sure, you didn't investigate this since you focused on effectiveness. My point is that you should actually look at the efficiency of your approach and include experiments here. What you do is essentially a form of a perturbation based local explanation method. These are known to be expensive. A comparison in runtime to gradient based methods (e.g. integrated gradients) would be nice. \\n\\nLastly, I still do not agree with the way that you use the term \\\"faithfulness\\\". 
Your method generates local explanations. There are multiple issues with local methods. In fact, in your experiments you find that there are multiple alternative ways to explain the same prediction. The notion of faithfulness is a strong one. It says that \\\"exactly the presence of these features/nodes caused the model to behave that way\\\". It is very difficult if not impossible to achieve this with local methods. So, I would ask you to not equate fidelity and faithfulness but to consider that fidelity is but one way of measuring whether local explanations are good at maintaining the original behaviour of the model.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thanks for your comments. For the points raised we have the following response:\\n1) We have now added an abstract formulation of GNNs in Section 3.1. In Section 3.2, we have improved the description of the used computational graph. We have made the observation more explicit that for a k-layer GNN, the information used to create a prediction is contained in the k-hop neighborhood of the query node.\\nS is the set of selected nodes (from the computational graph) and features that form an explanation. We have added additional information about the notation in Section 3.2 and also updated the pseudocode accordingly. In addition, we added a table in appendix A with all details (repeated) of the used notation. \\n2) Since this paper's focus was on showing the effectiveness of our approach, we did not invest in making ZORRO efficient. Our method's computational complexity depends on the number of samples we use to estimate the fidelity. We can make our experiments arbitrarily faster by reducing the number of samples. We also intend to use batching and reusing samples to further improve Zorro's efficiency in the future. We have added a section in the appendix stating the reasoning for our design choices, which already make our current implementation reasonably efficient. We have added a complexity analysis in the current version of the article.\\n3) Your suggestion is akin to using Zorro as a feature selection method to train a sparse model. And if the sparse model makes the same prediction as the original GNN, then the explanation is correct. If we remove the nodes and re-train them, we would end up with a different GNN to measure fidelity. Be that as it may, different nodes have different explanations. A node could be present in one explanation but masked-out in another. This complicates building a valid graph from the explanation masks, let alone training. 
\\nBut following your suggestion, we retrieved the explanations for all training nodes of the GNN on Cora and selected the top k features, which were most often in the first explanation. Similarly, we retrieved all explanations with GNNExplainer and selected the top k features with the highest summed feature mask values. The results are the following (see Section 4.4):\\n| Method | k=1 | k=10 | k=50 | k=100 |\\n|---------------------|-------|------|------|-------|\\n| ZORRO ($\\\\tau= .85$) | 0.24* | 0.50* | 0.71* | 0.77* | 0.78* |\\n| GNNExplainer | 0.15 | 0.21 | 0.35 | 0.54 | 0.66 | \\n(Explained GCN: 0.79, * marks highest value)\\n4) We have now added experiments on synthetic graphs where ZORRO outperforms baseline. Our explanations also highlight the fact the GNN may not require all nodes of the ground truth. To show this, we have added some example explanations in the appendix. \\nWe believe that fidelity, as defined in the paper, is a more realistic measure of the goodness of explanation as it measures how using the explanation alone leads to the same model decision. Despite that, for synthetic datasets, we use measures other than fidelity as also used by GNNexplainer. Our method also outperforms GNNexplainer under those measures.\\n| Method | #Nodes | Recall | Precision | Accuracy |\\n|--------|--------|--------|-----------|-----------|\\n| ZORRO ($\\\\tau=0.85$) | 2.48 | 0.35 | 0.94* | 0.90* |\\n| ZORRO ($\\\\tau=0.98$) | 5.42 | 0.50* | 0.90 | 0.90 |\\n| GNNExplainer | 5.34 | 0.35 | 0.33 | 0.79 |\\n(* marks highest value, all average values)\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thanks for your review. We discuss the raised weaknesses consecutively and have numbered them to be precise:\\n\\n1) The masked-based interpretation methods alluded by the reviewer are predominantly for models that use masking during model building \\u2013 like in Language (Lei et al. 16, Bastings et al. \\u201819, Invase ICLR\\u201919). We are crucially different in that we operate only on the trained model \\u2013 the post-hoc setting. We have no access to the parameters of the trained model. We only assume that we have access to the final prediction. We also accept that there are parallels to the above masking methods. However, where we crucially differ from existing \\u201cmasking methods\\u201d (although NOT post-hoc) is that we measure the fidelity in expectation by sampling multiple instances. This provides our fidelity estimates necessary robustness that is missing in earlier works. The rate-distortion theory merely provides a theoretical framework on which we ground our fidelity measure. \\n\\n2) Our method is a local interpretability method. We explain the class label predicted for a single node. For the explained node, we currently use the same feature mask for all selected neighbors to reduce the explanation's complexity. It would be easy to extend our method such that we search for each node in the computational graph its feature mask. \\n\\n3) In short, we do not recalculate the orderings RF and RV because we did not observe any performance gains from doing so and initializing them already has the highest impact on our runtime. We have added a section in the appendix (see Appendix A) explaining the reasoning for our algorithm's design choices.\\n\\n4) The contribution is not merely the greedy algorithm, which is simple, as the reviewer rightly points out. Our major contribution is proposing a principled framework under which the validity of GNN models' explanations can be measured. 
In doing so, we intend to enrich the underexplored area of GNN explainability by proposing our fidelity measure. Also, the insights from our \\\"simple\\\" approach (1) already showcase the limitations of existing approaches [Section 4.1], (2) is empirically superior to the existing approach [sections 4.2, 4.4, 4.5], and (3) shows how explanations can be used to inspect trained models [Section 4.3]. These utility experiments, to the best of our knowledge, have completely been missing in the literature. \\n\\n5) We have rewritten the description of our algorithm to make it easier to understand. In addition, we have added a lengthy description of all details in Appendix A. \\n\\n6) We now include some example explanations for the synthetic dataset. See Figure 13 and Figure 14 for correctly and wrongly predicted nodes.\"}",
"{\"title\": \"Response to Reviewer #2 (2)\", \"comment\": \"E) Our goal of choosing different values of $\\\\tau$ is to show that our method can retrieve small-sized explanations at a higher value of $\\\\tau$ too. In principle, $\\\\tau$ is a hyperparameter that a user can specify. The higher the value of $\\\\tau$, the better the explanation explains the model behavior. By construction, our approach will output explanations with fidelity higher or at least equal to $\\\\tau$. Choosing the desired fidelity is crucial. We aimed to illustrate in our experiments with different $\\\\tau$ that for higher fidelity, the explanation size may not necessarily increase. A point to be noted here is that for some nodes, we found that GNNexplainer obtains a fidelity of 1, but a close inspection showed us that the feature mask values are so distributed that each feature obtains almost the same importance, implying that the evaluation on the soft-masked explanation allowed the model to use nearly all of the features. That is why we emphasize checking the change in the size of our explanations for high fidelity (high $\\\\tau$) cases.\\n\\nF) Since this paper's focus was on showing the effectiveness of our approach, we did not invest in making ZORRO efficient. Our method's computational complexity depends on the number of samples we use to estimate the fidelity. We can make our experiments arbitrarily faster by reducing the number of samples. We also intend to use batching and reusing samples to further improve Zorro's efficiency in the future. For the new experiments on the synthetic dataset, we recorded on average a runtime way below a minute for each explanation. 
Since our runtime only depends on the number of nodes in the computational graph, the number of features and the number of samples, the runtime does not necessarily increase with the size of the graph.\\n\\nG) We have rewritten section 4.2 to make our point clearer: To show that Zorro, in fact, finds smaller explanations than GNNexplainer, we compare the entropy over the normalized feature mask distribution. For Zorro, we have a binary mask, which implies that Zorro's entropy will be equal to the log (number of selected features). Lower entropy here means a smaller number of selected features. \\nIn addition, we now have included results from the experiments on synthetic data from GNNExplainer, which support the above arguments. The ground truth of the dataset contains five nodes, and ZORRO keeps the explanation size small by choosing on average 2.48/5.42 nodes for $\\\\tau=.85$ resp. $\\\\tau=.98$.\\n\\nQ1) We choose the same computational graph as the underlying GNN algorithm. Specifically, for a k-layer GNN, we use the node's k-hop neighborhood as the computational graph. We now added a background section of GNNs (section 3.1) and described the used computational graph more clearly in Section 3.2. \\n\\nQ2) Lines 13 and 14 of Algorithm 3 perform recursive calls to Algorithm 3, which allows retrieving multiple explanations.\"}",
"{\"title\": \"Response to Reviewer #2 (1)\", \"comment\": \"Thanks for the kind and positive comment. We view our paper as a step towards a better evaluation of post-hoc interpretability approaches for GNNs, and we are pleased about the constructive comments that allow us to reflect on the limitations of the current evaluation regimes.\", \"our_detailed_response_to_your_raised_points\": \"A) We intend to exactly shed light on the multiple-explanation limitation of the existing GNN explanation approaches. Unlike existing approaches that output a SINGLE explanation, we experimentally show that multiple explanations exist with better or similar fidelity as the soft explanation. Therefore, an expectation that there is ONE perfect explanation might indeed be misplaced. There are multiple possible reasons for a given prediction based on the feature and neighborhoods, and outputting only one of them is akin to not making the user aware of the real evidence. We choose to output multiple disjoint explanations as a lower bound on the potential number of explanations that could exist. We expect that there are many overlapping explanations.\\nWe accept the concern that the beneficiary of multiple explanations would not guide the user to improve the model or localize potential spurious correlations. Still, through this paper, we want to question the assumption (or expectation) that a single explanation provides a complete picture of the inner workings of a GNN. \\n\\nB) We accept the claim that explanations are meant for users and hence should be evaluated by humans. However, in our opinion, what can be assessed by a human is the explanation style and not the effectiveness of explanations. In this regard, soft masking techniques that output a probability distribution over features or nodes are well-known to be less interpretable to humans than hard masks as explanations. 
This is especially true when dealing with a large number of features as in our experiments (Cora has > 1K features, and some nodes have a very high outdegree).\\nHowever, we would want to point out a central limitation of existing works in the human evaluation of explanation methods. Human evaluation regimes where a human is shown multiple explanations and is asked to choose the best explanation are fraught with multiple biases and are not a true indicator of \\u201cgoodness of the explanation\\u201d. Why? Imagine there are two explanations to a GNN prediction. \\n(a) An incorrect explanation that corresponds to human understanding (say Homophily) and \\n(b) another, correct one that does not align with human understanding (a spurious correlation perhaps). \\nIn such cases, the human evaluation might incorrectly evaluate the wrong explanation. This has been routinely observed for post-hoc explanation methods in text and images (cite Rigorous Science of Interpretability by Been Kim & Doshi Velez). All post-hoc interpretability approaches face similar threats when using human explanations. Consequently, we propose an information-theoretic point of view: If an explanation is correct, it must be predictive.\\n\\nC) We have included some examples for the newly added synthetic dataset, which showcase our approach's results. Please take a look at Figures 13 and 14.\\n\\nD) This is a valid comment about the stakeholders that we target. We make it more explicit that our explanations are mainly targeted towards model builders, designers, and practitioners of GNNs. We see Zorro's utility in the evaluation and debugging phase to get insights into the model under question. Our utility experiments showcase one possible way in which Zorro was used to create visualizations that shed some light on the learning behavior of GNNs.\"}"
"{\"title\": \"Interesting method - but no empirical evidence of whether explanations are meaningful\", \"review\": \"The authors propose ZORRO, a post-hoc explanation method for node classification with graph neural network architectures. ZORRO leverages rate-distortion theory to generate masks that select nodes in the target node neighbourhood and their most important features.\\n\\n* The problem is very relevant to the GNN community, and I am glad to see more works coming in on this topic.\\n* The idea of relying on rate-distortion theory is interesting and original, and to the best of my knowledge this is the first time I see it used to tackle this research problem.\\n* The paper is well structured and organised.\\n* The original contribution is sufficient.\\n* Related work is sufficiently well covered.\\n\\nNevertheless, the paper suffers from shortcomings:\\n\\nA) The paper claims that ZORRO can generate multiple, disjoint explanations, apparently all highly faithful. This seems to be at odds with the authors' claim to explain the behaviour of the model. In other words, if I were on the receiving side and I was given multiple _disjoint_ explanations, which one should I trust more? How can explanations shed light on the behaviour of models if they are disjoint? I see ZORRO is able to generate overlapping explanations as well (a property compatible for example with example-based techniques in XAI literature such as counterfactual explanations), but I have mixed feelings on the effectiveness of disjoint explanations in practice.\\n\\nB) A drawback of this work is the complete absence of human-based evaluation. I acknowledge explainable AI literature is rife with examples of accepted papers, but the authors seem to deliberately disregard this aspect (\\u201cWe [.. ] are not interested if an explanation is congruent to human understanding\\\", Sec1). If humans are not important, then what is the reason you explain your predictions? 
If the goal is limited to debugging a model, perhaps the narrative should be revisited. All in all, I believe users should be central in an XAI piece, and papers in this area should help the reader understand if the generated explanations meet users' expectations - even in an ML conference such as ICLR, even for an 8-page paper.\\n\\nC) The paper does not include any examples of the generated explanations. It is hard to figure out if the claimed fidelity brings meaningful results in practice, and the reader is left with this doubt. Aside from a full-fledged evaluation campaign (see A. above), some examples would really help make the case. \\n\\nD) Experiments do not include any evidence of whether ZORRO explanations work in practice. Besides, as I mentioned above, the authors should probably clarify which audience they are targeting (i.e. engineers debugging a model, end users trying to understand the reasons for a specific model outcome, etc.)\\n\\nE) I was expecting experiments to assess the impact of \\\\tau (the user-defined fidelity threshold). The authors experiment with .98 and .85 - and \\u201cthe choice of \\\\tau has limited influence\\u201d, but I would have expected empirical evidence for such statements (i.e. experiments on a wider range of \\\\tau, to assess size and fidelity). If choosing a desired fidelity is not crucial, the paper should show so.\\n\\nF) There are no experiments on runtime complexity. The reader is left without evidence of how long it takes to generate an explanation for a target prediction.\\n\\nG) It is unclear if ZORRO achieves more faithful and also smaller explanations than GNNExplainer. Sec 4.2 suffers from clarity issues, as does Figure 5.\\n\\nH) Some sections could be better clarified, to help the reader understand important aspects of the work: Example: In sec1, the \\u201cnotations\\u201d paragraph would benefit from proofreading and re-wording. 
Sec 4.2 could also be refined.\", \"minor\": [\"Size matters for explanations, but smaller does not always mean better. For example, in medical decision support systems, some clinicians may prefer longer and more thorough explanations. Your mileage may vary.\", \"Figure 5 is poorly legible.\", \"Some typos along the way (e.g. \\u201cdenotes the binary column vector of selected nodes and Fs denote the binary row vector of selected nodes \\u201c sec 1, \\u201cEffectively the complete input is presented as an input\\u201d in 4.2)\"], \"questions_for_the_authors\": [\"Q1) How does ZORRO decide the size of each computational graph to work with (i.e. the size of the neighbourhood)?\", \"Q2) It is not entirely clear to me how ZORRO generates multiple explanations. I could not find how this is done in Algorithms 1-3. Could you please clarify?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"Official Blind Review #1\", \"review\": \"This work proposes to explain graph neural networks using hard masking techniques. Specifically, it tries to find the node mask $V_s$ and feature mask $F_s$ which can identify the most important information of the input such that the masked information can yield a high fidelity score. This work proposes a greedy method, ZORRO, to explore these hard masks, which can be used as the explanations of the prediction. Experimental results are interesting and promising.\", \"strengths\": [\"The task is very important. GNNs are very popular but they are mostly treated as black boxes. Interpreting GNNs is still less studied.\", \"Compared with GNN-Explainer, this work focuses on using hard masks to explain GNN predictions. It is a reasonable choice since soft masks, which are used in GNN-Explainer, may introduce new semantic meaning or noise to the node representations since these representations are very sensitive.\", \"Experimental results are very interesting. First, there exist multiple explanations for the same input graph that all lead to high fidelity scores. Second, the proposed method can obtain higher fidelity scores than GNN-Explainer and more sparse explanations. In addition, this work studies several types of GNNs, such as GCN, GAT, GIN, APPNP.\"], \"weaknesses\": \"- The connection between the proposed method and data compression is not convincing. From my understanding, it belongs to the mask-based interpretation methods, which are widely studied in other domains, such as image and NLP. Then I do not think it is something new from other fields--data compression in information theory.\\n- In the proposed method, all nodes share the same feature mask $V_s$. Is it a proper choice? Is it possible that different nodes may have different important features? Then probably it is better to not share the $V_s$?\\n- In the proposed method, the ordering information $R_V$ and $R_F$ are stored. 
It is computed in the beginning and kept fixed for later steps. However, in the later steps, the algorithm will update the $V_S$ and $F_s$, then why do we use the same ordering information? Top nodes, in the beginning, may not be top any more after some nodes/features are selected?\\n- The method itself is very straightforward, which is a simple greedy algorithm. Then I believe the technical contribution may not reach the bar of ICLR. \\n- The algorithm is not clearly explained. What\\u2019s the meaning of $V_r$, $F_r$, $R_{V_p}$, and $R_{F_p}$, etc.? How are they initialized?\\n- For the comparisons with GNN-Explainer, we need to see some real examples\\u2014explanations for both correct predictions and incorrect predictions. It is not enough to just report numerical results.\\n\\n\\nI am willing to adjust my score if my concerns are properly addressed.\\n\\n=====Update after rebuttal=====\\n\\nI have read the authors' rebuttal. However, my concerns are not well addressed. \\n\\n1. There are a lot of mask-based methods for interpretation in different domains [1] [2] [3] [4]. Existing methods [1][2][3] are providing post-hoc explanations for a pretrained model. I still believe \\\"the connection between the proposed method and data compression is not convincing\\\". \\n\\n2. I still believe the novelty is limited. \\n\\nHence, I am keeping my score unchanged. \\n\\n[1] GNNExplainer: Generating Explanations for Graph Neural Networks, NIPS 2019\\n\\n[2] Real Time Image Saliency for Black Box Classifiers, NIPS 2017\\n\\n[3] Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, ICML 2018\\n\\n[4] Rationalizing Neural Predictions, EMNLP 2016\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"Method for explaining GNN behaviour with room for improvement\", \"review\": \"The authors address the problem of explaining the behaviour of graph neural networks (which operate on a computation graph based on their k-hop neighbourhood) such as a graph convolutional network (GCN).\\n\\nThe core idea is to identify, for each node v in the graph, the nodes and features of the graph most relevant to the behaviour of the GNN for node v. That is, the goal is to find a subgraph of the computation graph associated with a node v in the graph. Importantly, the authors propose to test whether the chosen subgraph is relevant (and the complement wrt the computation graph irrelevant) by adding random noise on the parts deemed irrelevant by their method. \\n\\nThe method is evaluated through a metric called fidelity, which is the agreement in label output between the behaviour of the original and masked GNN, in expectation over the noise distribution. \\n\\nWhile the paper is overall well written, a source of confusion is the authors' tendency to conflate the computation graph and the graph to which the GNN is applied. The most important notion here is S, defined as a subset of the computational graph. When defining this, it is important to also define precisely this computation graph. What does it look like (abstractly, independent of the GNN instance used)? For instance, there is a nice compact way to unify most message-passing neural networks. (see e.g., https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html) \\nWhen you look at this definition, you see that there are several learnable functions (phi, etc.), the aggregation function, and finally the classification layer. Now, in your definition of a computation graph, what are the nodes? Are the applications of the learnable functions each a node? What about the aggregation (one node?). 
Again, I think for the reader to fully understand what your explanations S look like, this needs to be rigorously defined. My assumption here was that the computation graph groups computations such that nodes in the computation graph and nodes in the graph to which the GNN is applied coincide. Generally, I think you should spend more effort on section 3. The notation in the argmax statements in section 3.1 is also strange. For instance, S is defined as a pair. So it should be written as argmax F((V_p, {f})). Also, what is the p here? \\n\\nAnother worry I have is the efficiency of the approach. If your average number of features and nodes that exceed the fidelity threshold is K (the average size of S) and the graph has N nodes and F features, you need to evaluate the GNN KN+KF times to obtain an explanation for one node. For large graphs and/or graphs with numerous features, this can be expensive. And this is for the case when you compute the expectation with one Monte-Carlo sample from the noise distribution.\\n\\nThe most disappointing aspect of the paper, however, is the experimental evaluation. Sure, it is interesting to assess the multiplicity and size of explanations. What would be more interesting, however, is to evaluate how faithful your explanations really are. And here is where I have a disagreement with your assumptions. You write \\u201c[...] that is completely faithful to the model i.e., the explanation achieves the fidelity value of 1.\\u201d But achieving a fidelity of 1 does not mean that your explanation is faithful. We could only know this if you removed the nodes and features and retrained the model with the same seeds/initialization. There is an intricate interplay between the nodes and features during training of a GNN. What you evaluate is how close to the original behaviour the GNN is when you remove certain nodes and features. But I would question whether this is a proper definition of faithfulness. 
\\n\\nMy suggestion would be to also run experiments where you retrain GNNs and check whether the behaviour is indeed such that removing the nodes and features your method deems unimportant leads to a minor change in behaviour. \\n\\nThe synthetic experiments of the GNNExplainer paper are not included. But it makes sense to me to define synthetic graph classes where the presence of certain features/nodes is known to cause the node label by construction. This way one can check whether those features are the ones identified by the XAI method. I would encourage the authors to also run these experiments and compare to the results from the GNN explainer. \\n\\nFinally, it is not entirely fair to compare other methods to yours through the notion of fidelity alone. Your method is defined to optimize for it. As I mentioned above, fidelity is one way to measure the quality of a reduced graph but not the only one. It is by no means the only one to measure \\u201cfaithfulness,\\u201d as I have outlined above. For instance, it would also make sense to compare based on the measures introduced in the GNNExplainer paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
oVz-YWdiMjt | Single Layers of Attention Suffice to Predict Protein Contacts | [
"Nick Bhattacharya",
"Neil Thomas",
"Roshan Rao",
"Justas Dauparas",
"Peter K Koo",
"David Baker",
"Yun S. Song",
"Sergey Ovchinnikov"
] | The established approach to unsupervised protein contact prediction estimates coevolving positions using undirected graphical models. This approach trains a Potts model on a Multiple Sequence Alignment, then predicts that the edges with highest weight correspond to contacts in the 3D structure. On the other hand, increasingly large Transformers are being pretrained on protein sequence databases but have demonstrated mixed results for downstream tasks, including contact prediction. This has sparked discussion about the role of scale and attention-based models in unsupervised protein representation learning. We argue that attention is a principled model of protein interactions, grounded in real properties of protein family data. We introduce a simplified attention layer, factored attention, and show that it achieves comparable performance to Potts models, while sharing parameters both within and across families. Further, we extract contacts from the attention maps of a pretrained Transformer and show they perform competitively with the other two approaches. This provides evidence that large-scale pretraining can learn meaningful protein features when presented with unlabeled and unaligned data. We contrast factored attention with the Transformer to indicate that the Transformer leverages hierarchical signal in protein family databases not captured by our single-layer models. This raises the exciting possibility for the development of powerful structured models of protein family databases. | [
"Protein Structure",
"Proteins",
"Contact Prediction",
"Representation Learning",
"Language Modeling",
"Attention",
"Transformer",
"BERT",
"Markov Random Fields",
"Potts Models",
"Self-supervised learning"
] | Reject | https://openreview.net/pdf?id=oVz-YWdiMjt | https://openreview.net/forum?id=oVz-YWdiMjt | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"R_enki4OIUY",
"bJ5cx8ucEKl",
"ZynkDfMTa3D",
"4HNZ8zBrWOK",
"BTIwWCHQhm",
"BrabluwryDn",
"TOpyRElKJ5O",
"uy5Q9CRRBRy",
"GSbuA-yEYY",
"uBqGHjM1FQb",
"35c7F7Aganm",
"A7p-BK8Hhd",
"AowgLbK53ne"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040418142,
1606193534585,
1606193498216,
1606193378583,
1606193289122,
1606192990599,
1606192817802,
1606192776812,
1606192393129,
1604508837682,
1603857013076,
1603783439640,
1603204319672
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3397/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3397/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3397/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3397/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3397/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3397/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3397/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3397/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3397/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3397/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3397/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3397/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper shows a connection between Potts models and Transformers and uses the connection to propose a factored attention energy to use in an MRF. Results are shown using this factored-attention energy. Also, pretrained BERT models are used to predict contact maps as a comparison.\\nThe reviewers found the paper interesting from a protein structure prediction point of view, but from a machine learning perspective their opinion was that the paper does not offer a coherent, compelling method that is very novel, and the connection between Potts and an energy-based attention model is not that strong. In addition, the presentation was somewhat circuitous. \\n\\nThe authors made improvements to the paper over the course of the review, which is appreciated, but the method presented does not match the target for an ICLR paper in terms of methodological contributions.\"}",
"{\"title\": \"Response to Reviewer 2 (Continued)\", \"comment\": \"> Section 3.2, \\u2018x = E_seq(x_i) + E_pos(i)\\u2019: How did you compute positional embeddings and why do you add embeddings instead of concatenating them?\\n\\n> Section 3.2, \\u2018We treat the positional embedding E_pos as an overall summary of per-position information\\u2019. Please describe more clearly what this summary is.\\n\\nIn the single-layer attention model, we use a learned positional encoding and sequence embedding. Each single-layer model is trained separately on the aligned positions of a family, so we take the positional encoding to represent the information carried by each position in the MSA. This motivates our contact extraction procedure of using only positional encoding to extract contacts for an MSA. We sum the sequence and positional encodings to make our single-layer attention directly comparable to the Transformer, and we believe concatenation would be interesting to explore for follow-up work. We provide these details in Section A.2. \\n\\n> Section 4, first paragraph: The L of the precision at L metric is not the sequence length but the number of top sequences. You describe L as being both.\\n\\nWe appreciate this comment from the reviewer. We implement precision at L as defined for the CASP competition, which uses length for L when selecting top predicted contacts. See \\u201cAssessment of contact predictions in CASP12: Co-evolution and deep learning coming of age\\u201d by Schaarschmidt et al 2018 and \\u201cAssessing the accuracy of contact predictions in CASP13\\u201d by Shrestha et al 2019. We have clarified our presentation of precision at L in the main text and added these citations to provide context.\\n\\n> Figure 6 is not discussed. 
Instead of showing this figure, I suggest quantifying the correlation depending on the number of heads by computing and discussing the Spearman correlation.\\n\\nWe agree that this result received insufficient discussion in the original manuscript. We now include plots of both Pearson and Spearman correlation with Potts weights as Figures 11 and 12 in the supplement. Note that we have also expanded this experiment to compute weight correlations on the full set of 748 families.\\n\\n> Rives et al 2020 \\u2018Biological structure and function emerge\\u2026\\u2019 have recently shown in addition to Vig et al that protein contacts can be predicted from attention maps, which must also be pointed out in the \\u2018Background\\u2019 section.\\n\\nWe thank the reviewer for this suggestion. In our new Background section \\u201cSupervised Contact Prediction,\\u201d we have now cited Rives et al 2020, mentioning that they used Transformer embeddings as input features to linear projections and deep residual networks for supervised contact prediction.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"> The paper is clearly written and the evaluation is solid. I have only a few comments.\\n\\nWe appreciate the reviewer's positive feedback.\\n\\n> What is the maximum sequence similarity between the training sequences of ProtBERT and sequences in TrRosetta alignments that were used for testing? Sequences must not overlap and should have a maximum similarity of, let\\u2019s say, 80%.\\n\\nWe thank the reviewer for this comment. Contact extraction from ProtBERT-BFD is entirely unsupervised. ProtBERT-BFD does not see any contact maps from trRosetta during training. For extracting contacts from ProtBERT-BFD, we do not use the trRosetta alignments, but only the exact reference sequence corresponding to the PDB chain in question. The 500 families used for identifying the top 6 contact-prediction heads from ProtBERT-BFD are disjoint from our 748 test families.\\n\\nWe also note that, typically, the most valuable sequences for improving unsupervised contact prediction are not sequences with high sequence identity to the target, but instead distant homologs with low sequence identity (see \\u201cAssessing the utility of coevolution-based residue\\u2013residue contact predictions in a sequence- and structure-rich era\\u201d Kamisetty et al., 2013). As such, we do not think overlap of close homologs between the trRosetta alignments and BFD pretraining set would introduce biases into our results. \\n\\n> You describe that you used three sets of families from the TrRosetta dataset (A.4.1). Why did you use only 732 families for testing (set 3)? Were these all families that were not included in the first two sets? How many families do the first two sets include and how similar are families of different sets? Ideally, train, tune, and test families should belong to different superfamilies.\\n\\nWe appreciate these questions and have clarified our setup in Section A.6.1 of the revised paper. 
We use a small set of six families to find settings of learning rate and weight decay for standard attention. We identified a set of ten challenging families for an early version of factored attention and found settings of learning rate and regularization which attain reasonable performance on all families. We have now updated our evaluation procedure to uniformly evaluate across all 748 families.\\n\\nWe note that the risk of overfitting in our setting is extremely low, as we are not training supervised contact prediction. All single-layer models are trained from scratch for each family (except for the new value-matrix sharing experiment, where values are frozen). Our sweeps were performed to find learning rate and regularization coefficients which did not crash or have serious performance issues. We believe that our models are actually under-optimized compared to the Potts baseline, as the regularization and optimization of those models has been tuned over many years by the structure prediction community. \\n\\nThe reviewer\\u2019s suggestion of using superfamilies for model development is an insightful one and we plan to incorporate it into future work.\\n\\n> You describe in section A.3 how you extracted protein contact maps from the attention maps of ProtBERT. This is an important detail that must be described in the main text. How did you choose the 6 heads? Did you choose them manually or, for example, by training a linear model to predict contacts from attention maps and using the weights for identifying important heads, or computing the weighted average of attention maps?\\n\\nThank you for pointing out this omission, and we agree that this merits further explanation. We have added a Section 3.3 titled Extracting Contacts in which we describe our procedure for selecting the heads from ProtBert-BFD. We also provide a table of precisions for ProtBERT heads in Table 2 of Section A.5. 
We select the six best individual heads whose attention maps had the top average contact precision on 500 families randomly selected from the trRosetta dataset and not in the set of 748 test families. We extract contacts from ProtBert-BFD by averaging the LxL attention maps from these six heads, then symmetrizing additively.\"}",
"{\"title\": \"Response to Reviewer 1 (Continued)\", \"comment\": \"> The ablation study makes me feel that the results are the opposite of the conclusion. Here is my logic. With the above two assumptions, the attention model can achieve similar performance to the Potts model, or a little bit better. However, when we train on the unaligned sequences, which is the usual case in which we would use the attention model, the performance becomes unacceptable. Then why would we want to use the more expensive attention model? The attention model in the NLP field is a different story. Those models are refreshing the SOTA performance all the time. However, in the protein field, the attention model can still only achieve comparable performance to the classic models, after a two-year study. They seldom outperform classic algorithms. The results in this manuscript are consistent with the previous research. So I am not convinced regarding the conclusion in the abstract: \\\"Taken together, these results provide motivation for training Transformers on large protein datasets.\\\"\\n\\nThese comments from the reviewer have been very helpful for improving the paper. We agree wholeheartedly with the reviewer that attention models applied to proteins have not succeeded until they have pushed state-of-the-art in ways not previously imaginable. The purpose of our paper is to provide a clear argument that this is possible, but we do not claim to have achieved it yet. In our update of the paper, we have clarified that the remarkable aspects of ProtBERT\\u2019s performance are not improved precisions, but the fact that ProtBERT was not trained with any protein family labels, data clustering, or even knowledge of the existence of protein families, all of which are available to Potts. We believe that understanding how this is possible through pretraining is an important task for the ML community. 
Our main contribution is identifying that modeling interactions within families by attention exists on a continuum which includes Potts models. We then use single layer models to identify a set of explicit properties of protein families, such as common amino acid interactions and sparse contacts, and show that attention leverages them for single families. The fact that our single layer models trained on MSAs match the performance of both Potts and ProtBERT validates our claims that there are natural modeling assumptions that benefit attention. These experiments, along with successful sharing of frozen value matrices across hundreds of protein families, provides evidence that development of sophisticated multifamily models is fertile ground for future exploration.\\n\\n> The potential audience of this paper would be those who are specialized or interested in bioinformatics and protein.\\n\\nWe believe that our work provides a point of entry into protein modeling for the broader representation learning community, and helps focus future work. This is particularly relevant to those not already specialized in bioinformatics or protein ML, since we suggest that there is novel ML work to be done beyond adapting BERT variants to protein data.\\n\\nOur work also provides a constructive exploration of the capabilities of Transformers applied to proteins. Whereas BERTology-based work tries to probe the weights of a Transformer and disentangle what it\\u2019s doing, we take a principled approach of simplifying Transformers into a few key elements and showing that it still succeeds. We think the success of this kind of analysis on protein data shows that protein representation learning can contribute insights to the study of attention.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"> The manuscript is concise and easy-to-understand.\\n> The idea is intuitive and reasonable, with experimental support.\\n\\nWe appreciate these kind comments. \\n\\n> The analog between the simplified attention model and the Potts model is intuitive but not rigorous. The authors claim that they provide a theoretical connection between the two models. However, that part is not strong enough, without proof.\\n\\nWe have reorganized Section 3 (\\u201cMethods\\u201d) to clarify the formulation of Factored Attention as a pairwise MRF. In our new treatment, we directly define factored attention using the energy functional, rather than as a simplification of attention. We hope this clarifies that the simplified attention model is a pairwise MRF by definition. We also believe this new exposition conveys the modeling assumptions in factored attention much better, and we are grateful to the reviewer for this feedback.\\n\\n> There are two assumptions in this work, which make the simplified model different from the attention models that the previous researchers used. Firstly, they train the model on multiple sequence alignment instead of the raw sequences. If they train the model on the raw sequences, the performance is unacceptable, as shown in Figure 16, which is consistent with the previous research. Secondly, they removed the sequence embedding in queries and keys. This simplification makes the model only consider the statistical pattern in the MSA. To me, this one is a too strong assumption.\\n\\nWe chose to make these assumptions in order to understand how these particular properties of protein families impact performance in isolation and to give a concrete example of models that live between the Transformer and Potts. 
Our goal is not to propose factored attention as an isolated model which outperforms all others, but to increase understanding of the Transformer\\u2019s success and highlight new avenues for model development. We have considerably reworked the methods and discussion to better communicate this goal and make our stance clear. We believe this has significantly improved the quality of our exposition, so we thank the reviewer for this comment.\\n\\nWe agree with the reviewer that our assumptions are considerably stronger than the assumptions made by a pretrained Transformer. We would also note that MSAs are readily available in the unsupervised contact extraction setting where Potts models are currently state-of-the-art.\\n\\n> The running time and hardware comparison is missing. If the single layer of attention is comparable to the Potts model, not outperform it significantly, while it would take much more time to train, the researchers would need to think twice if they want to use the attention model.\\n\\nWe appreciate the reviewer\\u2019s emphasis on computational efficiency. We have added discussion about performance tradeoffs to our section on standard attention. Further, we have provided a detailed study of throughput for all models on various MSA lengths in Section A.3. This includes a supplemental table giving batches/second at various lengths for Potts, factored attention, and single-layer attention. In addition to throughput, we now discuss gains in parameter efficiency more clearly in the results section. These show that factored attention can model the 748 test protein families with 11 billion fewer parameters than Potts (a 91% reduction).\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"> The paper is well written. I appreciate the effort put in by the authors to define basic protein terminologies which might not be obvious to readers without biology background.\\n\\nWe are grateful for this positive feedback from the reviewer.\\n\\n> The contributions of the paper would have been more interesting if the proposed modifications of the attention layer led to increased prediction performance of models which are representative of the state-of-the-art. Specifically, if retraining ProtBERT-BFD using the modified attention layer led to further improvement in performance, that would have been a solid contribution.\\n\\nWe thank the reviewer for these comments. The goal of our paper, as reflected in our updated draft, is to highlight the potential of models between the two extremes of Potts and Transformers for learning powerful features from protein databases. We don\\u2019t present factored attention as an isolated modeling advance, but as a tool for exploring how the modeling assumptions made by attention leverage underlying signal in protein data.\\n\\nIn light of this goal, we have expanded our results to highlight advantages of factored attention that stem from modeling assumptions. Our experiments show that factored attention can do as well as Potts with substantially fewer parameters, in some cases using only a handful of heads. We have also shown that factored attention can train on all 748 families using only one shared set of amino acid features (value matrices), indicating the potential for increased parameter sharing within families. \\n\\nWe would also like to clarify that composing multiple layers of factored attention can not be done simply like with attention, since position and sequence can not be disentangled after the first layer. 
We believe finding ways to compose modified attention layers presents an interesting avenue for future work and have mentioned this in the Discussion.\\n\\n> Are MRF models really that competitive for contact map prediction? From what I understand, deep neural networks have been far better at this task for quite some time now. At multiple places in the paper, the authors give the impression that MRF models are close to state-of-the-art.\\n\\nWe appreciate this question from the reviewer and have added a new Background section titled \\u201cSupervised Structure Prediction\\u201d to help resolve this confusion. Existing deep neural network approaches for contact prediction take a supervised approach, training with pairs of the form (MSA, contact map). All approaches evaluated in this paper are trained only on sequence with no supervised signal from known contact maps. MRF-based features, also known as \\u201ccoevolutionary features\\u201d in the literature, are essential inputs to the supervised deep neural networks mentioned by the reviewer. (See the reviews of CASP 11 and 12 performance for the importance of coevolutionary features in neural network performance: Monastyrskyy et al., 2016 and Schaarschmidt et al., 2018) \\n\\n> In the last paragraph of the introductory section, the idea of encoding the MSAs is introduced which seemed interesting. However, from what I understood from the rest of the paper, the queries and keys are extracted solely based on the position of the amino acid. Is that right? If so, does the position correspond to the position in the sequence or in the MSA? Are the actual alignments used in any of the results in the paper? Please clarify.\\n\\nWe have clarified these questions in the updated draft, and we believe it has greatly strengthened the paper. We use position in the aligned sequence for our positional encoding, rather than position in the unaligned sequence. 
Factored attention computes its queries and keys using only this MSA positional encoding, while the single layer of attention uses both position and sequence. All results for single-layer models (Potts, factored attention, and single-layer attention) are from training on MSAs except for the single ablation study shown in Figure 21.\\n\\n> Section 3.1: \\\"each edge\\\" should have a capital e.\\n\\nWe appreciate this comment and have fixed it in the manuscript.\\n\\n> Section 3.3, specifically the part where you show that factored attention is a pairwise MRF, is too brief. Given that this is a main contribution of the paper, it would be worthwhile to explain this connection in a more detailed manner.\\n\\nThis comment was very helpful to us and we have followed the reviewer\\u2019s advice in rewriting our Methods section. We have moved the mathematical discussion to the supplement and now focus on the underlying modeling assumptions that give rise to factored attention. We hope this greatly clarifies how factored attention differs from Potts as a pairwise MRF and also explains why factored attention (and attention more broadly) is a natural model class for protein families.\"}",
"{\"title\": \"Response to Reviewer 4 (continued)\", \"comment\": \"> The authors state that \\u201cThe ability of factored attention to capture similar contacts to Potts without use of APC suggest that it may be more suitable for protein design.\\u201d I don\\u2019t follow this conclusion. If the factored attention model performs equivalently to the Potts model alone and worse than the Potts model with APC correction, why would it be more suitable for protein design?\\n\\nWe have removed this sentence as part of focusing our discussion.\\n\\nThis particular discussion was based on the paper \\u201cAn evolution-based model for designing chorismate mutase enzymes\\u201d by Russ et al (2020), which includes a discussion on how sequences sampled from Potts models do not match underlying MSA statistics, indicating poor sample quality. Sampling does not involve APC, so models that can improve on performance of Potts without APC could be fruitful for sequence generation.\\n\\n> What makes the single-layer attention or factored attention models compelling for protein modeling? What problems do these models solve that are not better solved by the Potts model or traditional transformers?\\n\\nWe are grateful to the reviewer for these important questions. The goal of our paper, as reflected in our updated draft, is not to present factored attention as an isolated advance over existing Potts models or Transformers, but instead to demonstrate that there exists a huge unexplored space between these two ends of the spectrum. We are, to our knowledge, the first work to clearly explain that Potts models and Transformers represent two extremes for modeling databases of protein families, and we see our single-layer models as an essential part of demonstrating that there exist interesting models in the middle. 
\\n\\nAs part of addressing the reviewer\\u2019s concerns, we have reworked our experiments to more clearly highlight interesting phenomena of factored attention not immediately available to either Potts or Transformers. Our new results more carefully show that factored attention is able to use relatively few heads for recovering L or L/5 contacts and that factored attention is able to successfully match the performance of Potts models on all test families using one frozen set of amino acid features. These results demonstrate that the parameter sharing introduced by factored attention can drastically reduce the number of parameters needed compared to Potts, while making more explicit assumptions about protein families than Transformers. We have also laid out empirical advantages of ProtBERT-BFD over our single layer models as evidence that even more powerful multifamily models based on scientifically grounded assumptions may exist.\\n\\n> Present a compelling use case for the factored attention model. What questions can be answered (or better answered) with this model over the Potts model or other alternatives? One idea is to use the factored attention model as the layers in a full deep transformer model and see if this architecture can improve tasks where MSA training data is available.\\n\\nWe reproduce our main contributions stated in the global comment:\\n\\n1. We show that attention can be linked to Potts models using purely biological assumptions about protein data, and provide evidence that these assumptions are borne out in protein structural data. \\n2. Empirical evaluations of factored attention\\u2019s performance show that these assumptions lead to competitive performance with Potts models, and show that they lead to increased parameter-efficiency on long families. We also present the ability to tie value matrices across all families as evidence that multifamily hierarchical structure is readily accessible even to single attention layers.\\n3. 
We show that ProtBERT-BFD can learn contacts competitively with Potts models over a wide range of protein families. This builds on recent work from Vig et al, but the work of Vig et al does not carefully compare contact prediction metrics with an optimized Potts model implementation. We believe this is an encouraging result that suggests pretraining Transformers on proteins merits further work.\\n4. We contrast the performance and parameter efficiency of ProtBERT-BFD and factored attention to suggest the existence of even richer unrecognized hierarchical structure exploited by pretrained Transformers and not leveraged by either Potts, factored attention, or single-layer attention.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"> The methods section is somewhat confusingly written. I think the factored attention model would benefit from being described on its own terms rather than in connection with typical multiheaded attention, especially because the isolation of position encodings and amino acids at those positions dramatically simplifies the understanding of W_Q, W_K, and W_V.\\n\\nWe found these comments from the reviewer very helpful when rewriting our Methods section for the updated draft. We have moved the discussion linking factored attention and attention to Section A.2, and have instead presented both factored and single-layer attention by adding assumptions to Potts models step-by-step. We agree with the reviewer that a direct presentation of factored attention is much clearer. One of our goals in this paper is to highlight why attention makes use of natural properties of protein family data, and we believe this new exposition contributes to that. We hope the reviewer finds the new structure much improved. \\n\\n> The connection between the Potts model and attention described in this paper should be obvious to those who already understand attention models and Potts models and the empirical results of the factored attention model don\\u2019t make this approach seem compelling. \\n\\nWe agree that the connection between Potts and attention described in this paper is mathematically simple, but we see this as a clear advantage of our paper. Our goal in introducing factored attention is to break down various aspects of the Transformer in a protein-specific setting, relate them to existing state-of-the-art Potts models, and understand how much these particular assumptions impact performance. 
We also believe that, while those of us who have spent the effort to understand both attention models and Potts models do find the connection in this paper rather self-evident, it is of broad use to the community for it to be spelled out and validated empirically in an application domain of significant scientific importance. We have also expanded our results section to more clearly lay out insights brought by factored attention which are not readily apparent from either Potts models or ProtBERT. We believe this context further demonstrates the value of our approach.\\n\\n> In the discussion, the authors make several broad future speculations. Some of these would be interesting contributions and I encourage the authors to develop this work further. \\n\\nWe appreciate this comment from the reviewer. As part of our rewrite, we have considerably shortened our discussion and focused on the aspects of multifamily models and pretrained Transformers not explored by our analysis. Speculations about the role of APC and the quality of sequences sampled from factored attention have been removed to help increase focus.\\n\\n> Maybe factored attention could be promising for better capturing dependencies between positions for deeper transformers on MSAs, but it isn\\u2019t likely that this work will be of broad interest to the machine learning community. This manuscript seems better suited to a workshop or other specialized venue. \\n\\nWe appreciate this comment and have worked to better clarify what aspects of this work are of interest to the broader ML community. We believe protein representation learning is a topic of considerable interest in ML and that our work suggests new avenues, beyond pretraining Transformers, to the community.\\n\\n> In the factored attention model, the authors use one-hot encoding of the position index as the position encoding. 
This is equivalent to learned position embeddings as in BERT which is worth mentioning.\\n\\nThis is a helpful observation from the reviewer which we have added to the discussion in Section A.2.\\n\\n> The authors discuss single-site potentials as a difference between Potts models and single layer attention models and then show a comparison of attention models with and without single-site potentials showing little difference. However, attention models already implicitly have single-site potentials which arise from the positional encoding input features. Granted, this is not the case for the factored attention model where single-site potentials seem to have more effect, though in the negative direction.\\n\\nWe are grateful for this observation from the reviewer that the positional encoding input to single-layer attention model creates a single-site term implicitly. We have added discussion on this point to Section A.2, \\u201cImplicit single-site term in single-layer attention.\\u201d\"}",
"{\"title\": \"Global Comment to All Reviewers\", \"comment\": \"We are very grateful to the reviewers for their comments and suggestions, which have helped us significantly to improve our paper in many aspects. We have worked hard to incorporate feedback from all reviewers in our updated draft. We have provided detailed responses to each reviewer, but would like to outline a few overall changes to the paper.\\n\\n__Methods Section:__ All reviewers commented that the methods section could be more accessible. We have now reworked it to more clearly state the modeling assumptions of both factored and single-layer attention. This also clarifies the connection to Potts models. We hope the reviewers find this section much improved.\\n\\n__Improving Loss Functions:__ We realized the regularization we used for factored attention did not match the regularization used for Potts. We have fixed this and updated all results. We provide a discussion of regularization in the Methods section and Section A.4 of the Appendix.\\n\\n__Expanded Exploration of Hyperparameters:__ We have expanded our experiments on the impact of number of heads and head size, running each configuration on the entire set of 748 families rather than 10. The results section has been updated and expanded accordingly. We have also highlighted a specific example of interest in Figure 5 which shows that only 4 heads can be used to extract contacts for a particular family.\\n\\n__Added Experiment on Shared Amino Acid Features:__ We have added an experiment on parameter sharing across families. 
Our paper mostly focuses on the capacity of attention to share parameters across positions within a single family, but we believe an initial exploration of sharing across hundreds of families highlights the exciting work yet to be done on attention-based hierarchical models of many protein families.\\n\\n__Clarifying Contributions and Position:__ Many reviewers asked us to clarify the use-case and goals of introducing factored attention. Our paper addresses the broader questions around pretraining large models on databases containing thousands or more protein families. Unlike in NLP, there remains considerable debate about if these models are succeeding and if it is worth continuing to develop them further, a question also raised by Reviewer 1. Our paper contributes to this broader question in protein representation learning in four major ways:\\n\\n1. We show that attention can be linked to Potts models using purely biological assumptions about protein data, and provide evidence that these assumptions are borne out in protein structural data. \\n2. Empirical evaluations of factored attention\\u2019s performance show that these assumptions lead to competitive performance with Potts models, and show that they lead to increased parameter-efficiency on long families. We also present the ability to tie value matrices across all families as evidence that multifamily hierarchical structure is readily accessible even to single attention layers.\\n3. We show that ProtBERT-BFD can learn contacts competitively with Potts models over a wide range of protein families. This builds on recent work from Vig et al, but the work of Vig et al does not carefully compare contact prediction metrics with an optimized Potts model implementation. We believe this is an encouraging result that suggests pretraining Transformers on proteins merits further work.\\n4. 
We contrast the performance and parameter efficiency of ProtBERT-BFD and factored attention to suggest the existence of even richer unrecognized hierarchical structure exploited by pretrained Transformers and not leveraged by either Potts, factored attention, or single-layer attention. \\n\\nThe goal of our paper is to reframe the discussion around pretraining attention-based models on protein sequences. Critiques of pretraining focus on whether pretraining at scale is effective compared to existing state-of-the-art techniques. We believe our contributions indicate that there is ample room for combining both approaches by building novel hierarchical models of protein family databases. We also take the position that such hierarchical models will heavily involve attention, possibly modified attention mechanisms designed specifically for protein data.\"}",
"{\"title\": \"Too basic and lacks compelling use case\", \"review\": \"This manuscript describes a connection between Potts models and attention as implemented in modern transformers. The authors then present an attention model in which positional encodings are defined as one-hot vectors indicating fixed positions in the multiple sequence alignment and train single layer attention models. These models, unsurprisingly, perform similarly to Potts models without APC correction for contact prediction. The methods section is somewhat confusingly written. I think the factored attention model would benefit from being described on its own terms rather than in connection with typical multiheaded attention, especially because the isolation of position encodings and amino acids at those positions dramatically simplifies the understanding of W_Q, W_K, and W_V. The authors also spend a long time describing well-known methods, but without providing additional insight. The connection between the Potts model and attention described in this paper should be obvious to those who already understand attention models and Potts models, and the empirical results of the factored attention model don\\u2019t make this approach seem compelling. In the discussion, the authors make several broad future speculations. Some of these would be interesting contributions and I encourage the authors to develop this work further. Maybe factored attention could be promising for better capturing dependencies between positions for deeper transformers on MSAs, but it isn\\u2019t likely that this work will be of broad interest to the machine learning community. This manuscript seems better suited to a workshop or other specialized venue. Some specific comments on this work follow below.\\n1.\\tIn the factored attention model, the authors use one-hot encoding of the position index as the position encoding. This is equivalent to learned position embeddings as in BERT, which is worth mentioning. 
\\n2.\\tThe authors discuss single-site potentials as a difference between Potts models and single layer attention models and then show a comparison of attention models with and without single-site potentials showing little difference. However, attention models already implicitly have single-site potentials which arise from the positional encoding input features. Granted, this is not the case for the factored attention model where single-site potentials seem to have more effect, though in the negative direction.\\n3.\\tThe authors state that \\u201cThe ability of factored attention to capture similar contacts to Potts without use of APC suggest that it may be more suitable for protein design.\\u201d I don\\u2019t follow this conclusion. If the factored attention model performs equivalently to the Potts model alone and worse than the Potts model with APC correction, why would it be more suitable for protein design?\\n4.\\t What makes the single-layer attention or factored attention models compelling for protein modeling? What problems do these models solve that are not better solved by the Potts model or traditional transformers?\", \"what_would_raise_my_score\": \"1.\\tPresent a compelling use case for the factored attention model. What questions can be answered (or better answered) with this model over the Potts model or other alternatives? One idea is to use the factored attention model as the layers in a full deep transformer model and see if this architecture can improve tasks where MSA training data is available.\", \"edit\": \"I have increased my score in light of the response and manuscript edits. The manuscript is improved, but I think the method still needs more development. There are a number of interesting pieces but the final picture of an improved protein model is not fully resolved.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Review\", \"review\": \"Summary:\\nThis paper explores the connection between the classic Potts model-based approaches and modern Transformer-based approaches for protein contact map prediction. To this end, the authors introduce a simplified variation of the attention layer called factored attention, and show that a single layer of factored attention performs operations similar to those performed by the Potts model-based methods.\", \"pros\": [\"The paper attempts to connect classic and modern approaches to protein contact map prediction, which might be interesting to the people working in this field. The evidence presented (simplifying attention layer so that the equations look similar to the classic methods, numerical results of the simplified attention layer close to the classic methods) is reasonably convincing.\", \"The topic of the paper is quite timely, there has been a lot of interest recently in modelling proteins using the latest NLP techniques.\", \"The paper is well written. I appreciate the effort put in by the authors to define basic protein terminologies which might not be obvious to readers without biology background.\"], \"cons\": [\"The contributions of the paper would have been more interesting if the proposed modifications of the attention layer led to increased prediction performance of models which are representative of the state-of-the-art. Specifically, if retraining ProtBERT-BFD using the modified attention layer led to further improvement in performance, that would have been a solid contribution.\", \"Are MRF models really that competitive for contact map prediction? From what I understand, deep neural networks have been far better at this task for quite some time now. At multiple places in the paper, the authors give the impression that MRF models are close to state-of-the-art.\", \"In the last paragraph of the introductory section, the idea of encoding the MSAs is introduced which seemed interesting. 
However, from what I understood from the rest of the paper, the queries and keys are extracted solely based on the position of the amino acid. Is that right? If so, does the position correspond to the position in the sequence or in the MSA? Are the actual alignments used in any of the results in the paper? Please clarify.\"], \"comments\": [\"Section 3.1: \\\"each edge\\\" should have a capital e.\", \"Section 3.3, specifically the part where you show that factored attention is a pairwise MRF, is too brief. Given that this is a main contribution of the paper, it would be worthwhile to explain this connection in a more detailed manner.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very interesting work but lacks some clarifications\", \"review\": \"Recently, some researchers tried to apply attention models to the protein field, using self-supervised learning to predict protein contacts. In this work, the authors attempt to build the connection between such works and the old-school model, the Potts model. By simplifying some operations within the attention model, the authors managed to build an analogy between the simplified model and the Potts model. The analogy is intuitive and easy to understand. The authors further compare the simplified model and the Potts model on 748 protein families, showing that they are similar, or that the simplified attention model is perhaps even better. This is an interesting work. However, I also have a number of concerns. The advantages and disadvantages are listed below.\", \"pros\": \"1. The manuscript is concise and easy to understand.\\n2. The idea is intuitive and reasonable, with experimental support.\", \"cons\": \"1. The analogy between the simplified attention model and the Potts model is intuitive but not rigorous. The authors claim that they provide a theoretical connection between the two models. However, that part is not strong enough, without proof.\\n2. There are two assumptions in this work, which make the simplified model different from the attention models that previous researchers used. Firstly, they train the model on multiple sequence alignments instead of the raw sequences. If they train the model on the raw sequences, the performance is unacceptable, as shown in Figure 16, which is consistent with the previous research. Secondly, they removed the sequence embedding in queries and keys. This simplification makes the model only consider the statistical pattern in the MSA. To me, this is too strong an assumption. \\n3. The running time and hardware comparison is missing. 
If the single layer of attention is merely comparable to the Potts model, rather than outperforming it significantly, while taking much more time to train, researchers would need to think twice before using the attention model. \\n4. The ablation study makes me feel that the results are the opposite of the conclusion. Here is my logic. With the above two assumptions, the attention model can achieve similar performance to the Potts model, or a little bit better. However, when we train on the unaligned sequences, which is the usual case in which we would use the attention model, the performance becomes unacceptable. Then why would we want to use the more expensive attention model? The attention model in the NLP field is a different story. Those models are refreshing the SOTA performance all the time. However, in the protein field, the attention model can still only achieve comparable performance to the classic models, after two years of study. They seldom outperform classic algorithms. The results in this manuscript are consistent with the previous research. So I am not convinced regarding the conclusion in the abstract:\\n\\\"Taken together, these results provide motivation for training Transformers on large protein datasets.\\\"\\n5. The potential audience of this paper would be those who are specialized or interested in bioinformatics and proteins.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Overall good paper about the relationship between Transformers and Potts models\", \"review\": \"Summary\\n=======\\nTransformer models have recently been shown to capture protein contact information in their attention maps when trained unsupervised on millions of protein sequences. This paper draws parallels between Transformers and Potts models (fully-connected pairwise MRFs)--the current standard approach for protein contact prediction--and shows empirically that Transformers are competitive with Potts models. Understanding the differences and similarities between Transformers and Potts models makes Transformers less of a \\u2018black-box\\u2019 and helps to establish them as a principled method for contact prediction. The paper is clearly written and the evaluation is solid. I have only a few comments.\\n\\n\\nMajor comments\\n=============\\n1. What is the maximum sequence similarity between the training sequences of ProtBERT and sequences in the TrRosetta alignments that were used for testing? Sequences must not overlap and should have a maximum similarity of, let\\u2019s say, 80%.\\n\\n2. You describe that you used three sets of families from the TrRosetta dataset (A.4.1). Why did you use only 732 families for testing (set 3)? Were these all families that were not included in the first two sets? How many families do the first two sets include and how similar are families of different sets? Ideally, train, tune, and test families should belong to different superfamilies.\\n\\n3. You describe in section A.3 how you extracted protein contact maps from the attention maps of ProtBERT. This is an important detail that must be described in the main text. How did you choose the 6 heads? Did you choose them manually or, for example, by training a linear model to predict contacts from attention maps and using the weights for identifying important heads, or computing the weighted average of attention maps?\\n\\n\\nMinor comments\\n=============\\n4. 
Section 3.2, \\u2018x = E_seq(x_i) + E_pos(i)\\u2019: How did you compute positional embeddings and why do you add embeddings instead of concatenating them?\\n\\n5. Section 3.2, \\u2018We treat the positional embedding E_pos as an overall summary per-position information\\u2019. Please describe more clearly what this summary is.\\n\\n6. Section 4, first paragraph: The L of the precision at L metric is not the sequence length but the number of top sequences. You describe L as being both. \\n\\n7. Figure 6 is not discussed. Instead of showing this figure, I suggest quantifying the correlation depending on the number of heads by computing and discussing the Spearman correlation.\\n\\n8. Rives et al. 2020, \\u2018Biological structure and function emerge\\u2026\\u2019, have recently shown in addition to Vig et al. that protein contacts can be predicted from attention maps, which must also be pointed out in the \\u2018Background\\u2019 section.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
DE0MSwKv32y | Trust, but verify: model-based exploration in sparse reward environments | [
"Konrad Czechowski",
"Tomasz Odrzygóźdź",
"Michał Izworski",
"Marek Zbysiński",
"Łukasz Kuciński",
"Piotr Miłoś"
] | We propose $\textit{trust-but-verify}$ (TBV) mechanism, a new method which uses model uncertainty estimates to guide exploration. The mechanism augments graph search planning algorithms by the capacity to deal with learned model's imperfections. We identify certain type of frequent model errors, which we dub $\textit{false loops}$, and which are particularly dangerous for graph search algorithms in discrete environments. These errors impose falsely pessimistic expectations and thus hinder exploration. We confirm this experimentally and show that TBV can effectively alleviate them. TBV combined with MCTS or Best First Search forms an effective model-based reinforcement learning solution, which is able to robustly solve sparse reward problems. | [
"reinforcement learning",
"model-based",
"exploration",
"on-line planning",
"imperfect environment model"
] | Reject | https://openreview.net/pdf?id=DE0MSwKv32y | https://openreview.net/forum?id=DE0MSwKv32y | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"2OG8rGOVjB",
"u09GJAWGzG",
"H_YmCztnVvz",
"B886q-c_X2K",
"6Sq6Npq2CYt",
"ytdyrcImd0L",
"AdWpIwRGmV",
"nAKAovmTOnF",
"l5v-joks3-l"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040354231,
1606305430371,
1606305344102,
1606305278974,
1606305236484,
1603994109053,
1603925718116,
1603862478895,
1603858500571
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3396/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3396/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3396/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3396/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3396/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3396/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3396/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3396/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"After reading the reviews and the authors' comments, the meta-reviewer thinks the paper is not ready for publication in a high-impact conference like ICLR. The paper is not well positioned with respect to the literature, and the proposed techniques are not well discussed in relation to predominant paradigms like optimism in the face of uncertainty.\"}",
"{\"title\": \"Answer to AnonReviewer3\", \"comment\": \"Thank you for the review. We submitted the revised version of the paper with an improved literature overview (Section 2) and provided a statistical derivation of the underlying method (Section 3.2). The detailed answer follows.\\n\\nWe admit that we did not put enough emphasis on the theoretical side of TBV, hence giving the impression that the method is an ad hoc heuristic rule. In fact, however, it is rather closely related to UCB and statistical hypothesis testing. Before we state how, let us make two remarks:\\nThe state-of-the-art planners, given a perfect model, have several mechanisms to balance exploration and exploitation, leveraging the achievements of Multi-armed Bandit theory and Reinforcement Learning (value function estimation). For instance, the implementation of MCTS used in our paper follows [13], where the in-tree exploration mechanism applies a version of upper confidence bound exploration, taking the standard deviation of value ensemble predictions to measure uncertainty in estimates. This can be seen as a variant of UCB [1], UCB-V [14], or the log-exp method [15] (UCT [2] being UCB applied in the tree search context). We have empirically verified that this approach performs best among multiple choices, some of which were just mentioned. The way we train the value function ensemble follows [7], hence it can also be viewed through the Bayesian lens, similarly to Thompson Sampling. This is, however, the mechanics of the planner. \\nWe focus on jointly training the model-planner pair. This is an interesting task since directly trusting the planner will fail (due to model errors), and focusing on state-space exploration to improve the model (similarly to [11] or [16]) will slow or hinder the learning of the value function. Consequently, a balance has to be struck. 
\\n\\nComing back to TBV, we notice that the planner itself cannot distinguish between a perfect and an imperfect model (at least without an appropriate mechanism). If the model is learned, it is almost impossible to avoid errors, and over-relying on the planner can lead to suboptimal actions, which can then lead to propagation of errors in the value function estimates. Having recognized that problem, we utilize a statistical hypothesis testing framework to switch between using the planner\\u2019s exploration and a state-space exploration aiming to improve the model. The test is based on the prediction error distribution computed using the model ensemble. Such a definition is robust to the unknown scale of the prediction error, and it automates setting the threshold. Since the approach uses statistical hypothesis testing, it has a nice connection with confidence bound methods.\\n\\nRegarding the choice of environments, we would like to point out that TMR is a known testbed for exploration [19], and the Towers of Hanoi also pose a combinatorially challenging problem [20]. We have demonstrated that an off-the-shelf application of planning with a learned model can fail dramatically (in the Tower of Hanoi with 7 discs, without using TBV we could almost never find a solution, see Figure 4). \\n\\n[13] Milos et al., Uncertainty-sensitive learning and planning with ensembles, 2019. 
\\n\\n[14] Audibert et al., Tuning bandit algorithms in stochastic environments, 2007.\\n\\n[15] Lowrey et al., Planonline, learn offline: Efficient learning and exploration via model-based control, 2019.\\n\\n[16] Sekar et al., Planning to explore via self-supervised world models, 2020\\n\\n[17] Lee et al., Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning, 2020.\\n\\n[18] Kumar et al., Discor: Corrective feedback in reinforcement learning via distribution correction, 2020.\\n\\n[19] Guo et al., Efficient exploration with self-imitation learning via trajectory-conditioned policy, 2019.\\n\\n[20] Pierrot et al., Compositional Neural Programs with Recursive Tree Search and Planning, 2019.\"}",
"{\"title\": \"Answer to AnonReviewer4\", \"comment\": \"Thank you for the review. We submitted an overhauled version of the paper, taking into account the aforementioned concerns. In particular, we added a conclusion section, more references (Section 2), and a description of the formalism for the method (Section 3.2).\\n\\nThe design of TBV does not rely heavily on discrete environments, but we expect it would bring the most value in cases where false loops are present. In continuous domains, such errors are likely to occur when a discrete latent representation of observations is learned (for example, Hafner et al. [1] found that such a latent worked best for their model-based RL approach on Atari).\\n \\n \\n[1] Hafner, et al. Mastering Atari with Discrete World Models, 2020\"}",
"{\"title\": \"Answer to AnonReviewer2\", \"comment\": \"Thank you for the review. We have submitted a new version of the paper with several improvements. In particular, we added more references and described the formalism for the method (Section 3.2). The answers to your questions can be found below:\\nAd 1. We found that the choice of threshold for RANDOM does not significantly change the results, provided that it is 0.5 or less (see Appendix A.5).\\nAd 2. In our experiments, the majority (but not all) of the false transitions leading to plausible states were one-step false loops. We expect that TBV helps with other types of errors (including multistep false loops), since the agent replans at every step in the environment. This is indirectly confirmed by our experiments - in almost all cases agents with TBV were able to find a solution (within the given time step limits).\\nAd 3. This is an interesting idea. For both domains on which we conducted experiments, we found that the agent performance is robust to the choice of QR (see Appendix A.4), but it is likely that for other environments such a mechanism could improve the algorithm. On the downside, this introduces additional hyperparameters which may require tuning for each problem separately.\"}",
"{\"title\": \"Answer to AnonReviewer1\", \"comment\": \"Thank you for the review. We have submitted a new version of the paper with an improved presentation. In particular, we have added the pseudocode for BestFS (Appendix A.1), which should facilitate reading and make the text more self-contained. We have also expanded the related work section (Section 2), described the underlying formalism for our method (Section 3.2), and provided additional experiments (see Figure 6 in the main paper and Section A.5 in the Appendix). Below are more detailed answers:\\n\\nWe have added pseudocodes of MCTS and BestFS to Appendix A.1, where the choose_action() method is included.\\nThe heuristic function for best-first search is indeed the disagreement measure. Before a solution to a given problem is found, there are no rewards given to the agent and the only possible strategy is to explore the state space. The best node in the subgraph searched so far is the one which has the highest maximal disagreement (for each node we take the maximal disagreement over the actions, computed the same way as STATE_SCORE in Algorithm 2). This is included in the BestFS pseudocode added in this revision.\\n\\nWe believe that model-based RL and search algorithms are important areas, which have already led to spectacular results (see e.g. [1], [2]) but also have great potential for further development. \\n\\nWe aim to simultaneously learn the model and the planner. This requires a balance between state-space exploration (to improve the model) and planner exploration (to also improve the value function). \\n\\nChoosing an action using epsilon-greedy amounts to overriding (with epsilon probability) the action proposed by the planner and replacing it with a random action (sampled uniformly).\\n\\nWe discuss Pathak et al. (2017) and Sekar et al. (2020) in Section 2. As to the experiments, we compared our work to RND, which is somewhat similar to Pathak et al. (2017).\\n \\n \\n[1] Silver, David, et al. 
\\\"Mastering chess and shogi by self-play with a general reinforcement learning algorithm.\\\" arXiv preprint arXiv:1712.01815 (2017). \\n[2] Nagabandi et al. Deep dynamics models for learning dexterous manipulation, 2020.\\n[3] Sekar, et al. Planning to explore via self-supervised world models, 2020.\"}",
"{\"title\": \"A method for artificial curiosity using model uncertainty\", \"review\": \"The authors present a method to guide exploration that prefers to go to areas of the state space for which it is more uncertain. This uncertainty is obtained by measuring the standard deviation of the next state prediction from an ensemble of models. The authors call this the disagreement measure. At each step, a search is performed and the disagreement measure is obtained for each state visited. The disagreement measure for each action is compared to the distribution for all the states visited during the search. If it is above some threshold, then the action that maximizes the disagreement measure is taken. Otherwise, it takes the action determined by the search.\\n\\nThe algorithm presented was unclear. What does planner.choose_action do? Is the heuristic for best-first search (BFS) the disagreement measure? I don't understand how this should help the algorithm pick a good action to take. The paper says, \\\"The proposed action is the first edge on the shortest path to the best node in the subgraph searched so far.\\\" How is the best node determined?\\n\\nFurthermore, is this search necessary? It seems like it is mainly used as a comparison for the disagreement measure. What if the agent behaved greedily with respect to the disagreement measure all the time? Pathak et al. (2017) used a similar method, but with an inverse model.\\n\\nI am not quite sure about the comparisons the authors are making. In the case of BFS search, what does it mean to do BFS search with epsilon greedy? Also, this is an artificial curiosity method where curiosity is measured by the disagreement between the ensemble of models. However, there are no comparisons to other curiosity papers, such as Pathak et al. (2017).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting and well-written paper on model-based exploration\", \"review\": \"This work tackled the problem of model-based RL in environments where the reward is sparse and many actions are needed to achieve any. Particularly, the authors tried to solve the issue of one-step false loops in the model, which prevent further exploration. Measuring the uncertainty about the built model through an ensemble of models, they added a possibility of choosing an action different from what the planner suggests, promoting exploration. The work is very well written in general, especially the sections on problem definition and related work. I also appreciate that the proposed method is compared with multiple planners and tested on two different tasks. Having said that, the main missing analysis for me is that the method was not tested on environments where false loops do not exist. Given the nature of the problem definition, i.e. learning the environment, it is counter-intuitive not to test the method on a few standard benchmarks without any assumption. The proposed method does not have to get the best result on environments without false loops, but it is important to see how it behaves when the built model is already good. Other than this, I have a few more questions/concerns:\\n\\n1) The RANDOM parameter seems a little strange, especially because it looks too high, i.e. .5. Some analysis of the effect of different values of this (or just with and without RANDOM) on performance would be great. Also, I suspect that a change in RANDOM would also change the best QR. I think a plot similar to figure 7, but for RANDOM and the combination of RANDOM and QR, would improve the paper.\\n\\n2) The method is about one-step false loops. I would appreciate it if the authors talked briefly about multistep false loops. Could they be problematic in learning? 
If yes, could an extension of this method work?\\n\\n3) Have the authors considered a QR that changes with the number of steps?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Trust, but verify: model-based exploration in sparse reward environments\", \"review\": \"This paper presents a new method which can be combined with graph search algorithms to boost exploration when the uncertainty is high. This new mechanism, called TBV, can override actions given by the model to explore and verify model predictions. It is also shown in the experiments that TBV improves the model performance when combined with MCTS or BestFS. TBV utilizes the graph structure of the problem and finds the solution much quicker for both MCTS and BestFS.\\n\\nWhile the presented method is interesting with high performance, I found many editorial errors in the writing. For example, in the second paragraph of section 3.3, \\u2018we concentrated of exploration\\u2026.\\u2019, and \\u2018In our experiments, such a version proved to be effective in in discrete\\u2026\\u2019, just to name a few. There are so many errors like this and the paper needs serious rewriting. Also, having a conclusion or discussion section would help the structure of the paper. \\n\\nIn Figure 3, it is unclear whether the blue line (legend \\u2018Right corridor visited\\u2019) is the run without TBV. It could be interesting to discuss the extension of TBV to continuous environments. The reference format seems to have errors, since there are underlines.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Improperly positioned, minor technical contribution, unclearly written\", \"review\": \"This paper proposes an approach for encouraging exploration when planning over learned models of discrete reinforcement learning environments. The proposed method involves using an uncertainty-aware model (e.g., an ensemble of neural networks) to predict state-action transitions, together with a graph-based planner operating on this model. The key idea is to replace (with some probability) the planner's action with the action leading to the highest uncertainty in model prediction. The paper evaluates the proposed technique using two standard search planners (MCTS and BFS).\\n\\nUnfortunately, I think the significance and technical contribution of this work is minimal, an issue that most likely starts from a deficient literature review. What the authors refer to as Trust-But-Verify is just an ad-hoc instance of the well-known principle of *optimism in the face of uncertainty*, which underlies classic bandit and RL algorithms such as UCB1 [1], UCT [2], and Thompson Sampling [3, 4]. In the model-free setting, this idea has led to numerous recent algorithms, many of which also use ensembles for uncertainty quantification [5-8]. In the model-based setting there are also some precedents of work using similar ideas [9-11]. There is also a large body of work treating the problem from the point of view of Bayesian RL (see [12] for a survey). It is a bad sign that none of this body of previous work was discussed in the paper, which I would argue was the more relevant literature upon which the paper had to be positioned.\\n\\nThis could conceivably be excused if the paper's technical and experimental contributions were impressive enough, but this is not the case. 
In contrast to the literature outlined above (where proposed exploration strategies typically follow from rigorous statistical analysis), this work presents the proposed method as a heuristic rule, providing no insight as to why one should expect the approach to work well in general. Moreover, the experiments are done in relatively simple domains, and compared against simple baselines. Some of the baseline choices do not seem appropriate either. For example, why use $\\\\epsilon$-greedy for exploration, instead of more robust strategies using upper confidence bounds? \\n\\nFinally, the writing in the paper can be improved in many places. For example, the paper refers to using the graph structure of the underlying problem, but what this graph structure refers to is not properly defined anywhere in the paper. I imagine it refers to the graph wherein edges represent non-zero probability transitions between states, but this is not clear from the text. Additionally, some paragraphs add little in terms of content. For example, the first paragraph of Section 3 is devoted to describing the basic problem that all model-based RL methods are trying to solve; this issue is ubiquitous so there is no need for a full example and so much text to describe this. Other sentences are unclear, such as \\\"The optimistic and pessimistic errors are often of the same nature\\\", which I am not sure what it is referring to. Additionally, I didn't see a description of the learned model used in the experiments, which is not an obvious choice, since the environment is discrete. \\n\\nOverall, to end on a somewhat constructive note, I think the problem the authors are trying to solve is interesting and the proposed approach is based on the right intuitions. However, this work is still too immature for publication. 
I suggest to the authors to position their work properly with regards to the relevant literature, refine their technical contribution accordingly, and compare with more appropriate baselines. \\n\\n----------------------------------------------------------------------\\n[1] Auer, Peter, Nicolo Cesa-Bianchi, and Paul Fischer. \\\"Finite-time analysis of the multiarmed bandit problem.\\\" Machine learning 47.2-3 (2002): 235-256.\\n\\n[2] Kocsis, Levente, and Csaba Szepesv\\u00e1ri. \\\"Bandit based monte-carlo planning.\\\" European conference on machine learning. Springer, Berlin, Heidelberg, 2006.\\n\\n[3] Thompson, William R. \\\"On the likelihood that one unknown probability exceeds another in view of the evidence of two samples.\\\" Biometrika 25.3/4 (1933): 285-294.\\n\\n[4] Russo, Daniel, et al. \\\"A tutorial on thompson sampling.\\\" arXiv preprint arXiv:1707.02038 (2017).\\n\\n[5] Bellemare, M., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., & Munos, R. (2016). Unifying count-based exploration and intrinsic motivation. In Advances in neural information processing systems (pp. 1471-1479).\\n\\n[6] Ostrovski, G., Bellemare, M. G., Oord, A., & Munos, R. (2017, July). Count-Based Exploration with Neural Density Models. In International Conference on Machine Learning (pp. 2721-2730).\\n\\n[7] Osband, I., Blundell, C., Pritzel, A., & Van Roy, B. (2016). Deep exploration via bootstrapped DQN. In Advances in neural information processing systems (pp. 4026-4034).\\n\\n[8] Fortunato, M., Azar, M. G., Piot, B., Menick, J., Osband, I., Graves, A., ... & Blundell, C. (2017). Noisy networks for exploration. arXiv preprint arXiv:1706.10295.\\n\\n[9] Sanner, S., Goetschalckx, R., Driessens, K., & Shani, G. (2009). Bayesian real-time dynamic programming. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI-09) (pp. 1784-1789). IJCAI-INT JOINT CONF ARTIF INTELL.\\n\\n[10] Pathak, Deepak, Dhiraj Gandhi, and Abhinav Gupta. 
\\\"Self-supervised exploration via disagreement.\\\" arXiv preprint arXiv:1906.04161 (2019).\\n\\n[11] Shyam, Pranav, Wojciech Ja\\u015bkowski, and Faustino Gomez. \\\"Model-based active exploration.\\\" International Conference on Machine Learning. 2019.\\n\\n[12] Ghavamzadeh, M., Mannor, S., Pineau, J., & Tamar, A. (2016). Bayesian reinforcement learning: A survey. arXiv preprint arXiv:1609.04436.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
aYbCpFNnHdh | Visual Question Answering From Another Perspective: CLEVR Mental Rotation Tests | [
"Christopher Beckham",
"Martin Weiss",
"Florian Golemo",
"Sina Honari",
"Derek Nowrouzezahrai",
"Christopher Pal"
] | Different types of \emph{mental rotation tests} have been used extensively in psychology to understand human visual reasoning and perception. Understanding what an object or visual scene would look like from another viewpoint is a challenging problem that is made even harder if it must be performed from a single image. 3D computer vision has a long history of examining related problems. However, often what one is most interested in is the answer to a relatively simple question posed in another visual frame of reference -- as opposed to creating a full 3D reconstruction.
Mental rotation tests can also manifest as consequential questions in the real world, such as: does the pedestrian that I see, see the car that I am driving?
We explore a controlled setting whereby questions are posed about the properties of a scene if the scene were observed from another viewpoint. To do this we have created a new version of the CLEVR VQA problem setup and dataset that we call CLEVR Mental Rotation Tests or CLEVR-MRT, where the goal is to answer questions about the original CLEVR viewpoint given a single image obtained from a different viewpoint of the same scene. Using CLEVR Mental Rotation Tests we examine standard state of the art methods, show how they fall short, then explore novel neural architectures that involve inferring representations encoded as feature volumes describing a scene. Our new methods use rigid transformations of feature volumes conditioned on the viewpoint camera. We examine the efficacy of different model variants through performing a rigorous ablation study. Furthermore, we examine the use of contrastive learning to infer a volumetric encoder in a self-supervised manner and find that this approach yields the best results of our study using CLEVR-MRT. | [
"vqa",
"clevr",
"contrastive learning",
"3d",
"inverse graphics"
] | Reject | https://openreview.net/pdf?id=aYbCpFNnHdh | https://openreview.net/forum?id=aYbCpFNnHdh | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"-4qriMDLDij",
"yW6le3mwwKw",
"2QfZ2mUMGkE",
"gEyxG-hD3z_",
"VQz6Yxu_U3y",
"qBI7mrLc-hR",
"bq4AkdS-At0",
"hOBLSfDHjbj",
"_BrD2XsKL1h",
"P-OFBR52the",
"dIecMfK5HH",
"HpiDQBdAgH5"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040498983,
1605574158731,
1605573953363,
1605573673121,
1605573533911,
1605573112895,
1605572784898,
1605572686157,
1603904405721,
1603899031072,
1603847211673,
1603808933164
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3392/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3392/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3392/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3392/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3392/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3392/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3392/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3392/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3392/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3392/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3392/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper was reviewed by 4 experts in the field. The reviewers raised concerns about the lack of novelty, unconvincing experiments, and the presentation of this paper. While the paper clearly has merit, the decision is not to recommend acceptance. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere.\"}",
"{\"title\": \"Response\", \"comment\": \"**The self supervised learning of 3D volumes is an interesting idea, but it's use case in this particular problem is very weakly motivated both in experiments and theory. Why is this method better than the method discussed in Section 2.2.1? What is 3D data augmentation and how is it different from 2D data augmentation?**\\n\\nThe contrastive experiments are motivated by the fact that (1) we can learn a volumetric encoder from scratch from 2D images without performing any re-rendering tasks or assuming camera knowledge (n.b: while we leverage the viewpoint camera in the actual FILM stage, it is not used at all during contrastive encoder pre-training); and (2) they address the issue where some of the experiments in Table 1 had slightly inflated variances, which we strongly conjecture is simply due to the domain mismatch between the pre-trained ImageNet encoder and CLEVR-MRT. Instead, here we pre-train an encoder on the same dataset that we train 3D FILM on, and the variances indicated in Table 2 support this conjecture.\\n\\nThe differences between {2D,3D,2D+3D} augmentation are explained in Section 2.2.3. To re-iterate, we make a distinction between scenes and images: each scene comprises many images (views). For the sake of simplicity let us omit minibatches, and just consider individual examples. At each iteration of training, let S1 and S2 denote two randomly sampled scenes, and we sample x1, x2 ~ S1 (two views from S1) and y ~ S2 (one view from S2):\\n- 2D data augmentation means: pull T(x1) and T(x1) close together**, push T(x1) and T(y) far apart. Note that x2 is never used here.\\n- 3D data augmentation means: pull x1 and x2 close together, push x1 and y far apart. Note that here we do not use a stochastic augmentation function T -- the only stochasticity with respect to S1 is in sampling different views x1, x2 ~ S1.\\n- 2D + 3D: pull T(x1) and T(x2) close together, push T(x1) and T(y) far apart. 
This is just 3D data augmentation but with T() added back in.\\n\\nTraining with 3D data augmentation essentially teaches the contrastive encoder how to distinguish between *scenes*, i.e. it can detect whether some random image pair (x,y) belongs to the same scene or not, which imbues such an encoder with strong 3D reasoning properties. You do not get this with 2D-only because the contrastive loss is never trained to pull the encodings of x1 and x2 together. However, we found that if you combine this with 2D data augmentation, the contrastive encoder learns sufficiently good features that it can be used in the FILM stage to achieve our best result (87%).\\n\\nTo the best of our knowledge, the use of a volumetric contrastive encoder whose latent volumes can be subjected to rigid transformations in a downstream task (VQA) is novel.\\n\\n**This is not a typo, T() is a stochastic function so T applied to the same image twice (T(x1) and T(x1)) does not necessarily give two identical images!\\n\\n**\\\"There is a large variance in some experiments in Table 1. Is it due to the camera transformation embedding? It will be good to discuss the reasons why this is in Table 1 and not in Table 2.\\\"**\\n\\nHi, see point (1) in the official comment at the top of this page.\"}",
"{\"title\": \"Response\", \"comment\": \"**\\\"Since there are no statistics about the newly introduced dataset, it is hard to judge the empirical results in the paper.\\\"**\\n\\nSection X discusses how many scenes/questions there are for each split, the range over which azimuths are sampled for the camera, as well as what question types in the original CLEVR dataset were filtered out.\\n\\n**\\\"As for the CLEVR-MRT, even without any information about the viewpoints, the baseline models could achieve more than 70% accuracy on the proposed dataset. It seems that the dataset is too simple that the model could have good performance without knowing the camera parameters.\\\"**\\n\\nThanks for addressing this concern. Please see points (3) and (4) in the official comments at the top of the page.\\n\\n**There are many views in the scene that provide the correct answer \\\"1\\\" for this question. Without restricting the variance of the camera view (as in [1]), how can we ensure the model to infer the correct viewpoint?**\\n\\nStrictly speaking, the objective of the paper is not to infer the \\u2018correct\\u2019 viewpoint. This is not to say that the model doesn\\u2019t do this, but rather it is not trained explicitly to do so, as in the case of re-rendering. Rather than train the model to accurately re-render a new viewpoint, we are simply asking it to perform a sequence of 3D reasoning steps (i.e. the camera-conditioned transform on h followed by 3D conv FILM blocks) such that it is able to answer the question correctly.\\n\\nWe also address this in point (3) at the top of the page.\"}",
"{\"title\": \"Response\", \"comment\": \"**In particular, it would be nice if the paper could further explain how the current setup is better than solving the \\u201cview rendering\\u201d and VQA problems separately.**\\n\\nThank you for raising this point; we should clarify this in the paper. What we try to illustrate in our paper is that one can perform this style of VQA without ever having to explicitly perform a re-rendering via the training of a decoder network. For example, for either of the encoder networks proposed (the ImageNet encoder or the contrastive one), it would be possible to train a decoder using a reconstruction loss on ground truth images, e.g. in addition to the FILM model which conditions on h (after the postprocessor), another branch conditions on it to perform decoding/re-rendering. Not only is this much more computationally expensive; it may even be to the detriment of the FILM model, since now we have to carefully strike a balance between good VQA performance and good reconstruction.\\n\\nWe note that there is a distinction between wanting to see something from another point of view, versus wanting to answer a question from another point of view. The former is where re-rendering is appropriate, but we do not make the claim that this alternative (view rendering + VQA) performs better or worse empirically.\\n\\n**\\\"...however, there is no clear section in the main paper describing the details of how the dataset\\\"**\\n\\nDetails on how the dataset was generated are described at the end of Section 3 (though perhaps for clarity this should be moved into its own subsection!).\\n\\n**Concerns about the clarity of writing and organisation:** thank you, we will address these concerns.\"}",
"{\"title\": \"Response\", \"comment\": \"**As far as I could understand, even without knowing which view to look at, a model could achieve 70% accuracy indicating that there is a lot of bias in the dataset. Moreover, simply adding camera embedding with the question to a 2D baseline (Table 1, 2D FILM with camera), already performs close to the best 3D model and upper bound. (Please clarify if my understanding is wrong.)**\\n\\nHi, yes but there are some caveats to this that are worth knowing. See points (1) and (3) in the official comments at the top of the page.\\n\\n**Similarly, it's hard to parse what the training signal for each baseline is. Does a baseline use the rendered image from the other view during training?**\\n\\nWe apologise for any confusion here. We will make this more clear as well as the figures. To explain:\\n\\nFor the \\u201cupper bound\\u201d canonical view baseline, during all phases (train/valid/test), the dataset used is one where only the canonical view exists for each image. So there is no random sampling of views going on, and it is essentially equivalent to vanilla CLEVR (though not exactly because vanilla CLEVR has questions that are invariant to camera viewpoint such as counting, and we removed those).\\n\\nFor all other experiments, during all phases (train/valid/test), each image in the minibatch is a randomly sampled view from a randomly sampled scene. Each scene contains 20 pre-generated camera views whose azimuths were sampled at random with a Uniform(-180, 180) distribution. Note that the canonical view (at azimuth=0) is an extra view (so there are actually 21 views), but for all experiments apart from the \\u201cupper bound canonical\\u201d one the training procedure pretends the canonical view does not exist (so we can pretend each scene has 20 views, not 21). 
However, because the 20 pre-generated camera views were sampled from a Uniform(-180, 180), it *can* be the case that by coincidence the network is given a view that is close enough to canonical in the sense that answering the question is relatively straightforward. To answer your question, only one input image is fed through the network, and that is the viewpoint image (not the canonical one).\\n\\nFor each FILM experiment, the only inputs are:\\n- The viewpoint image\\n- The viewpoint camera. This is just a camera matrix wrt world coordinates, with 6 values (3 denoting pose on x/y/z and 3 denoting translation on x/y/z).\\n- The question, posed with respect to the canonical viewpoint\\n\\nTherefore, the difference between experiments is really what the network does with the camera coordinates of the viewpoint camera. In Table 1, we illustrate what the supervisory signal is with these columns:\\n- \\u201ccamera (embed)\\u201d means that we feed this camera through a trainable MLP to produce a camera embedding that is subsequently passed to the FILM blocks.\\n- \\u201ccamera (rotation)\\u201d means that we feed this camera through a trainable MLP which produces another camera matrix describing the *relative transform* between the current viewpoint and the canonical, and this is used to rotate/translate the feature volume before it is passed to FILM.\\n\\n**This could potentially be a useful scenario, however, the dataset proposed is different from the example as the camera viewpoint is provided as part of the input and not inferred from the question. The paper does not provide justification or evidence of how the current setup (i.e. with camera viewpoint) is useful.**\\n\\nWe address this concern (in part) in point (3) in the official comments.\\n\\nIndeed, it would be possible to run experiments on a new version of the dataset where the canonical viewpoint is described in the question, e.g. 
\\u201cHow many red cubes are there to the left of the green sphere *when I rotate my viewpoint by X degrees and translate by Y units?*\\u201d; however, this is just converting floats in a camera matrix to plain language and appending it to the question string. Converting floats to strings may be problematic, however, because that probably will not generalise well (i.e. does the RNN know the relationship between the floats represented as the strings \\u201c1.54\\u201d and \\u201c1.56\\u201d? It would have to learn how to do arithmetic). The alternative is to separate the camera coordinates from the question, which is precisely what we are doing now. Furthermore, to reiterate point (3) in our official comment, it is not unreasonable for a camera rig to know where it is oriented in the world, and simply use the coordinates directly to make some sort of inference.\"}",
"{\"title\": \"Response\", \"comment\": \"**It is unclear what happened to the spatial-related questions. They were removed of the dataset?**\\n\\nDo you mean the non-spatial questions? This is at the end of Section 3: \\u201cTo focus on questions with viewpoint dependent answers, we filtered the set of questions to only include those containing spatial relationships (e.g. \\u2018is X to the right of Y\\u2019).\\u201d In other words, questions whose answers would be *invariant* to the viewpoint camera (e.g. how many green cubes are there) are not used in the dataset generation process.\\n\\n**Results are promising, although why do they have such high variance? (7-8% of variance is not negligible by any means); considering that for some experiments it is likely that 2D FILM provides similar performance than 3D one. A statistical test might help to verify whether such results are statistically significant or not.**\\n\\nThank you for addressing this. See point (1) in the official comment at the top of this page.\\n\\n**I have mixed feelings in using such specific kind of [camera] information in the model, because in a real world scenario we don't have access to them.**\\n\\nSee point (2) in the official comment at the top of the page.\\n\\n**Another aspect: maybe authors made the task too easy and should have explored more challenging scenarios.** Due to time constraints we were unable to re-run all experiments on any new version of the dataset prior to the submission deadline. However, we have now generated a slightly more complex version; see the official comment.\\n\\nSee point (4).\\n\\n**ResNet outputs... feature maps h of dimensions (1024,14, 14)\\\" Is this correct? I believe Resnet101 outputs (2048, 14, 14) feature maps.**\\n\\nThanks! 
We should have clarified that this is the ResNet-101 with the last \\u2018block\\u2019 chopped off (the PyTorch version is split into four \\u2018blocks\\u2019 each with 3, 4, 23, and 3 modules inside, respectively, so we remove those last 3 modules). Therefore this gives 1024 feature maps rather than 2048.\\n\\n**Is it possible to visualize and understand what the postproc module does? It would be nice to visually explain the h\\u2032 (64, 16, 14, 14) tensor represents.**\\n\\nSure.\\n\\nx -> [frozen resnet encoder] -> [postprocessor] -> [rotation] -> [FILM blocks]\\n\\nFor our \\u201c3D FILM + projection\\u201d architecture (Sec 2.2.1, Fig 4), a postprocessor is needed since we are piggybacking on top of an encoder that was pretrained on ImageNet (an image/2D dataset). Using PyTorch-like shape notation and omitting the batch axis, the output of the ResNet encoder, for an input of (3,224,224), is (1024, 14, 14), i.e. 1024 feature maps of spatial dimension 14x14. The first thing that happens is that this tensor gets \\u2018projected\\u2019 into 4D via a reshape operation, so (1024//16, 16, 14, 14) = (64, 16, 14, 14). In other words, we now have 64 feature \\u2018cubes\\u2019 of size 16x14x14. This then goes through multiple 3D conv blocks (conv3d-BN-relu) to produce the output feature cube of the same dimension (64,16,14,14).\\n\\n**This is to be expected, considering that any camera information that is forward-propagated will contribute gradients back to the postprocessing parameters in the backward propagation, effectively giving the postprocessor supervision in the form of camera extrinsics.\\\". Can authors support/prove this claim?**\\n\\nWe ran an extra ablation on our best \\u201c3D FILM, projection\\u201d experiment, which is the number you see in parentheses (with the \\u2020 symbol) underneath the bolded result in Table 1. In this ablation the postprocessor was left randomly initialised, i.e. its parameters were not updated during training. 
This means the postprocessor simply performs a random projection. This achieved 68.98% accuracy on the validation set. Reiterating our architecture for \\u201cFILM + projection\\u201d:\\n\\nx -> [frozen resnet encoder] -> [postprocessor] -> [rotation] -> [FILM blocks] -> prediction\\n\\nWhether we are using the camera coordinates to condition the [rotation] op, or passing the camera coordinates to the FILM blocks (not shown here, but see Fig 4), camera information is being used in the forward pass of the network, and that subsequently influences the gradients that are back-propagated. Because the postprocessor is not frozen, it also receives those gradients and is therefore updated based on that information.\\n\\n**in practice, we found \\u03c4=0.1 to produce the lowest softmax loss.\\\" Which ones you have tested? Why \\u03c4 is 1.0 in Table 2?**\\n\\nThanks for spotting this error, indeed it should be 0.1.\"}",
"{\"title\": \"Results on a version of CLEVR-MRT with more variability\", \"comment\": \"**(4) Concerns about the simplicity of the dataset (R4, R3, R2), part II:**\\n\\nWe generated a modified version of CLEVR-MRT that exhibits more variability than the original dataset in order to further probe our proposed models and baselines. In the original dataset the only source of variability in the camera was in its azimuth, i.e. its orbit around the scene. However, this was at a fixed elevation of 30 degrees. In the modified version of the dataset, we also made the camera elevation stochastic (N.B: the canonical camera stays at the same elevation, and is unaffected), so that it is sampled from a Uniform(20,30). This means that in some cases the camera will be lower than it normally is. We also enabled small objects (the original dataset had only large-sized objects). Both of these additions can potentially increase occlusion effects (i.e. making it difficult to answer questions), but they are a better reflection of a real-world setting. **While we have uploaded a new draft and posted new numbers in Table 3 as well as an example scene of the modified dataset in Figure 6, we want to stress that hyperparameter tuning for some of these experiments is still ongoing.**\\n\\nComparing Tables 1 and 3, whereas the 2D FILM baselines for {no camera, camera} were ~70% and ~84% respectively, they are now (in Table 3) ~67% and ~80%, respectively.\"}",
"{\"title\": \"Key concerns addressed by reviewers\", \"comment\": \"Hi,\\n\\nWe thank all reviewers for their highly detailed feedback and comments. We appreciate that reviewers found that the mental rotation VQA task was interesting and/or important (R3, R4, R1); that the contrastive pre-training was an interesting contribution (R3, R4); that the results are promising (R3, R4, R1); and that the text was clear (R1, R3). In addition to responding to specific reviewers about concerns, there were some key concerns shared by several reviewers, so we will address those here.\\n\\n**(1) Reviewers (R3, R1) noted that some of the results in Table 1 (namely the 3D FILM ones) had unusually high variance, bringing up concerns about statistical significance of the proposed methods.**\\n\\nWe should have noted in the original submission that the inflated variance is due to a few runs running into local minima and plateauing at a relatively low accuracy. We note that the inflated variance only appears to be the case when we use the ImageNet encoder with 3D feature transformations, and does not appear to happen for our results when we use the contrastive pre-training procedure. It is highly likely that this is due to a domain \\u2018mismatch\\u2019 between ImageNet and CLEVR, because the ImageNet encoder remains frozen during training.\\n\\nFor example, if we take our best performing method in Table 1 (FILM + projection, using camera for rotation), find the best experiment with hyperparameter tuning and re-run that same experiment over 5 more seeds (for a total of 6 runs), the max validation set accuracy obtained by each is: **[0.93, 0.90, 0.93, 0.93, 0.90, 0.70]**. The mean/stdev of these numbers is 88.13 +/- 8.01, as reported in the original submission pdf. However, the variance has been severely inflated by the last run whose highest accuracy was 70%. 
In retrospect, we should have highlighted this phenomenon in the discussion section, as well as accounted for these outliers when computing the mean and variance. Therefore, we have updated those results by computing the mean/stdev over the best 3 performing models on the validation set. We have updated the numbers in Table 1 accordingly, as reflected in the revised pdf.\\n\\n**(2) R3 and R4\\u2019s concern about camera information being provided to the model and its potential infeasibility in practice:**\\n\\nIn real world settings, camera rigs can and do have knowledge about where they are situated in the world, for instance using SLAM or GPS coordinates. In that case, it is not unreasonable for e.g. an autonomous vehicle to answer queries by performing rotations and/or translations of its current viewpoint. \\n\\n**(3) (R4, R3, R2) Concerns about the simplicity of the dataset, primarily motivated by the 2D no-camera baseline which achieves 70%, but also concerns about strong biases in the dataset.**\\n\\nWe appreciate that this concern was mentioned. It is not uncommon for VQA datasets to contain strong biases and this phenomenon is not limited to CLEVR or its derivative datasets. It is an active field of research and is most commonly seen in the literature as language-based biases (see [1,2,3,4]) (i.e. the situation where VQA models may exploit statistical regularities in the question rather than considering what is in the image), although the source of bias is certainly not limited to just language. We argue that whether or not the no-camera baseline accuracy of 70% is problematic is highly dependent on the problem that we are attempting to solve. For example, in safety-critical applications such as autonomous vehicles or blind person navigation, a 30% error rate is high enough to be very problematic in a production setting. 
Because removing sources of bias in a dataset can be virtually impossible (especially if it is obtained in a real world setting), we decided to simply make these biases known in our results, which is why we included in Table 1 the majority class baseline and trained a question-only (RNN-only) baseline. Lastly, after we excluded outlier experiments and computed the mean/stdev accuracy over the top 3 models (see point #1 in this comment), our best result achieved 92.80 +/- 0.30. This is a rather considerable difference to 70%.\\n\\n- [1] Ramakrishnan, S., Agrawal, A., & Lee, S. (2018). Overcoming language priors in visual question answering with adversarial regularization. In Advances in Neural Information Processing Systems (pp. 1541-1551).\\n- [2] KV, G., & Mittal, A. (2020). Reducing Language Biases in Visual Question Answering with Visually-Grounded Question Encoder. arXiv preprint arXiv:2007.06198.\\n- [3] Manjunatha, V., Saini, N., & Davis, L. S. (2019). Explicit bias discovery in visual question answering models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 9562-9571).\\n- [4] Agrawal, A., Batra, D., Parikh, D., & Kembhavi, A. (2018). Don't just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4971-4980).\"}",
"{\"title\": \"Official Review\", \"review\": [\"### Overall\", \"Authors extend CLEVR dataset so as to consider multiple viewpoints, and evaluate current neural network models in that setting. They also update a standard approach to introduce camera viewpoint information in the network so it can better answer visual questions from the canonical scene frame even from other perspectives.\", \"### Positive aspects\", \"Authors provide a study on an important topic of Computer Vision: understanding multiple views of the same scene. They do such a study on a hard task, which is VQA. Actually, authors provide a more complex version of a simple VQA dataset (*simple* because it is synthetic and has very well established domain limits).\", \"Authors evaluate different training frameworks (supervised and unsupervised from scratch).\", \"Authors provided accuracy values for pretraining with NCE, which can be helpful.\", \"Results seem to be promising.\", \"In general, text is well written and easy to read.\", \"It is interesting that even a frozen pretrained network provides good results in such a visually different dataset. Still, it was nice that authors trained an encoder from scratch.\", \"Code already available!\", \"### Weak aspects and suggestions\", \"The problem is interesting, though my main concern is regarding the novelty and contribution of the paper. It seems to be an adaptation of CLEVR dataset, and an adaptation of the FILM model. In addition, authors use camera viewpoint information to ease the identification of the scenes. I have mixed feelings about using such a specific kind of information in the model, because in a real world scenario we don't have access to them. I might be wrong, but maybe it is possible to insert a module in their approach to estimate the camera parameters, so that the network itself could learn to predict how viewpoints work and how scenes change with that. 
I think this could be done by adding such parameters as target information for some of the models. For instance, the unsupervised architecture could be trained to predict whether the scenes are the same, but also the camera parameters. Apologies if I miss something here.\", \"The proposed architecture seems to be basically an adaptation of the FILM model considering camera viewpoint information.\", \"Is FILM (2018) the best performing approach on CLEVR to date? There are more recent approaches that could be used in the results section as baselines.\", \"It is unclear what happened to the spatial-related questions. Were they removed from the dataset?\", \"Results are promising, although why do they have such high variance? (7-8% of variance is not negligible by any means); considering that for some experiments it is likely that 2D FILM provides similar performance to the 3D one. A statistical test might help to verify whether such results are statistically significant or not.\", \"Font size for all images should be much larger. It is hard to read in the current size.\", \"The figure of the post processor does not help much. Authors could detail a little bit more what is inside that $postproc_w$ box.\", \"*\\\"Since the post-processor is a learnable module through which the FILM part of the pipeline is able to backpropagate through, it can be seen as learning an appropriate set of transforms that construct 3D feature volumes h0.\\\"* I suggest rewriting this sentence, it is very confusing.\", \"*\\\"While we obtained great results, it may not leave a lot of room to improve on top of our methods,\\\"* This sentence is odd. The sentence \\\"we obtained great results\\\" can be written in a more objective and scientific way (avoid the usage of adjectives). 
Another important aspect is: often it is easy to make the first large steps in a task (ImageNet, for instance), although it gets much harder to improve on that when results are good (AlexNet vs ResNet, see the performance difference). Another aspect: maybe authors made the task too easy and should have explored more challenging scenarios.\", \"*\\\"and we identified some ways in which the dataset could be made more difficult\\\"* Those ideas to make the task more challenging are indeed important. Why did the authors not perform experiments in such scenarios? It does not seem very hard to generate such datasets.\", \"Is it possible to visualize and understand what the postproc module does? It would be nice to visually explain what the $h'$ (64, 16, 14, 14) tensor represents.\", \"There could be some qualitative analysis.\", \"The dataset extension seems to be a large portion of the work. I think it could have a separate section with more details.\", \"### Additional questions\", \"What happens if another camera-conditioning strategy is used? For instance, simply concatenating or using other simpler fusion techniques. Would FILM perform much better than other simpler approaches?\", \"*\\\"ResNet outputs... feature maps h of dimensions (1024,14, 14)\\\"* Is this correct? I believe Resnet101 outputs (2048, 14, 14) feature maps.\", \"*\\\"in practice, we found $\\\\tau = 0.1$ to produce the lowest softmax loss.\\\"* Which ones you have tested? Why $\\\\tau$ is 1.0 in Table 2?\", \"*\\\"Another idea is to allow the viewpoint camera\\u2019s elevation to change. \\\"* That is true. Or even the distance from the camera. Why did the authors decide not to include such examples in this work?\", \"*\\\"This is to be expected, considering that any camera information that is forward-propagated will contribute gradients back to the postprocessing parameters in the backward propagation, effectively giving the postprocessor supervision in the form of camera extrinsics.\\\"*. 
Can the authors support/prove this claim?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
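Several of these reviews discuss FiLM conditioning on a question (and optionally camera) embedding. For readers unfamiliar with the mechanism, here is a minimal numpy sketch of feature-wise linear modulation; the shapes and names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def film(feature_map, cond, W_gamma, W_beta):
    """Feature-wise Linear Modulation: a conditioning vector produces a
    per-channel scale (gamma) and shift (beta) applied to a
    convolutional feature map.

    feature_map: (C, H, W) activations
    cond:        (D,) conditioning embedding (e.g. a GRU question state,
                 optionally concatenated with camera coordinates)
    W_gamma, W_beta: (C, D) learned projection matrices
    """
    gamma = W_gamma @ cond   # (C,) per-channel scale
    beta = W_beta @ cond     # (C,) per-channel shift
    return gamma[:, None, None] * feature_map + beta[:, None, None]
```

The same modulation applies unchanged to 3D feature volumes of shape (C, D, H, W) by broadcasting gamma and beta over one more axis, which is essentially what adapting FiLM to 3D tensors amounts to.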
"{\"title\": \"Official Review\", \"review\": \"The paper explores the problem of visual question answering from another perspective. Similar to VQA, a system is provided with a scene and a question. However, the difference is that the question needs to be answered from a viewpoint different from the one provided. Hence, the system needs to perform \\u201cmental rotation\\u201d. The paper creates a new dataset called CLEVR Mental Rotation Tests which is based on the prior CLEVR dataset. The paper also studies the efficacy of various supervised and self-supervised models on the proposed dataset.\\n\\n#### Strong points:\\n- The problem of asking questions related to \\u201cmental rotation\\u201d seems interesting.\\n- The paper shows that contrastive pre-training could be useful for the task, which is an interesting result.\\n\\n#### Weak points: \\n- Although the problem seems interesting, I am unclear about the usefulness of the proposed dataset. The paper says that \\u201cmany computer vision systems could benefit from neural architectures that demonstrate good performance for more targeted mental rotation tasks.\\u201d To justify this claim it gives the following example, \\u201cgiven the camera viewpoint of a (blind) person crossing the road, can we infer if each of the drivers of the cars at an intersection can see this blind person crossing the street?\\u201d. This could potentially be a useful scenario, however, the dataset proposed is different from the example as the camera viewpoint is provided as part of the input and not inferred from the question. The paper does not provide justification or evidence of how the current setup (i.e. with camera viewpoint) is useful. In particular, it would be nice if the paper could further explain how the current setup is better than solving the \\u201cview rendering\\u201d and VQA problems separately.\\n\\n- The dataset seems to be too simple for the mental rotation tests. 
It is unclear if in the future the dataset would be useful in distinguishing which models are better. As the paper shows, the \\u201c2D baseline without camera conditioning\\u201d already achieves 70% accuracy. As far as I could understand, even without knowing which view to look at, a model could achieve 70% accuracy indicating that there is a lot of bias in the dataset. Moreover, simply adding the camera embedding with the question to a 2D baseline (Table 1, 2D FILM with camera) already performs close to the best 3D model and the upper bound. (Please clarify if my understanding is wrong.)\\n\\n- The paper is poorly organized and hard to follow. For example, one of the contributions of the work is the CLEVR-MRT dataset, however, there is no clear section in the main paper describing the details of how the dataset was constructed. Instead, the information about the dataset is scattered in the introduction and related work. Another example is that the paper moves into talking about the method (Section 2) without defining the task concretely. It is only from the figure that one notices that the camera viewpoint is part of the input. From the examples provided in the introduction, the reader is under the impression that the camera viewpoint has to be inferred from the question itself. Similarly, it's hard to parse what the training signal for each baseline is. Does a baseline use the rendered image from the other view during training?\\n \\n#### Minor Comments:\\n- The figures and tables are interspersed with the text making the paper harder to read. It might be better to place the figures and tables at the top or the bottom of the page so that the captions are separated from the main text.\\n- Many equations like some parts of equation 1 and equation 2 might not be necessary as they don\\u2019t seem to contribute to understanding the paper. 
In many places, it seems like a simple intuitive explanation would be sufficient.\\n- Similarly, Figure 3 might not be necessary.\\n- The figures are unclear and hard to understand. For example, is the canonical viewpoint part of the input? If not, Figure 2 and Figure 4 could be changed to make it more clear.\\n- Is this line correct, \\u201cIf we add camera conditioning via FILM (that is, appending the camera embedding to the GRU\\u2019s embedding) then we achieve a much greater accuracy of 69.60 \\u00b1 0.09.\\u201d Should its value be 83.68 \\u00b1 1.21 as indicated in the table?\\n \\n#### Overall Recommendation:\\nAlthough the problem could potentially be useful, the current dataset seems to be not so useful and over-simplified. Moreover, I found the paper not well-organized and hard to understand even after multiple reads. I feel the paper can be improved a lot and hence recommend rejection for the current version.\\n\\n#### Post Rebuttal\\n(Copying from the discussion below)\\n\\nI would like to thank the author(s) for their response. After going over them, I am still not very confident about the paper and would stick to my initial assessment. Following are my primary concerns:\\n\\n\\\"We note that there is a distinction between wanting to see something from another point of view, versus wanting to answer a question from another point of view. The former is where re-rendering is appropriate, but we do not make the claim that this alternative (view rendering + VQA) performs better or worse empirically.\\\"\\n\\nI understand the distinction. But the issue still remains. Why is the out-of-the-box \\\"view rendering + VQA\\\" solution insufficient? Is there any empirical justification for it? If not, it's hard to see the value in the current setup. 
A potential way to address this could be to run a simple out-of-the-box \\\"view rendering + VQA\\\" baseline.\\n\\n\\\"(2) R3 and R4\\u2019s concern about camera information being provided to the model and its potential infeasibility in practice: In real world settings, camera rigs can and do have knowledge about where they are situated in the world, for instance using SLAM or GPS coordinates. In that case, it is not unreasonable for e.g. an autonomous vehicle to answer queries by performing rotations and/or translations of its current viewpoint.\\\"\\n\\nThe concern was not about the viewpoint of the observer but the new viewpoint from which the question has to be answered. Also, the location of the new viewpoint need not be converted into a float and appended to the question. It could be expressed in natural language. For example \\\"viewpoint of the driver in the other car\\\" like in the example provided by the paper. In the current setup the information about this viewpoint is provided in terms of exact coordinates, which makes the setup less interesting and not so practical.\\n\\nAlthough the authors improved some of the figures, the latest version of the paper does not seem to address other clarity concerns like a clear section for the dataset; organization of text and figures; removing unnecessary equations\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The idea is not entirely new and details are missing\", \"review\": \"Summary:\\n\\nThe paper studies visual question answering focusing on answering questions in a reference image of a different viewpoint. They propose a new dataset CLEVR-MRT drawing motivation from the well-known visual reasoning dataset CLEVR to illustrate the idea in which they have full control of the changes of viewpoints in an image. They then propose to use a volumetric encoder to represent the 3D features of an image via either 2D-to-3D projection or a contrastive-based encoder and further adapt an existing method (FiLM) to handle 3D tensors. Experiments on CLEVR-MRT show that the 2D features and 3D features of an image are complementary to each other.\\n\\nComments (Technical, Major Flaws of this paper): \\n\\n(1) The idea of addressing VQA in multi-view settings is reasonable but it is not entirely new. My main concern is the limitations of a synthetic dataset in a controlled setting where the relations between objects are limited compared to real data. In addition, I believe that given enough such generated question-answer pairs with associated programs, models may possibly learn to decode the generation procedure under the hood instead of learning the actual semantic meanings of language and the relations between objects.\\n\\n(2) Since there are no statistics about the newly introduced dataset, it is hard to judge the empirical results in the paper. As pointed out by many previous studies (e.g. Hudson, D.A., et al., 2019; Le, T.M., et al., 2020), models' performance seems to converge on CLEVR given enough training data. That said, existing methods easily fail if we reduce the number of training instances. As for CLEVR-MRT, even without any information about the viewpoints, the baseline models could achieve more than 70% accuracy on the proposed dataset. 
It seems that the dataset is so simple that the model can achieve good performance without knowing the camera parameters. This leads to concerns about the validity of the proposed dataset. Please address these points.\", \"references\": \"- Le, T. M., Le, V., Venkatesh, S., & Tran, T. (2020). Dynamic Language Binding in Relational Visual Reasoning. In IJCAI 2020.\\n - Hudson, D. A., & Manning, C. D. (2018). Compositional attention networks for machine reasoning. In ICLR 2019.\\n\\n(3) For those who are not familiar with the CLEVR dataset, briefly explaining the procedure to generate the dataset and its variants might be helpful. \\n\\n(4) Given a question related to the object positions, there may exist many different views that provide the same answer. Let's take the question \\\"How many green spheres to the left of the shiny gold thing?\\\" in Figure 4 as an example. There are many views in the scene that provide the correct answer \\\"1\\\" for this question. Without restricting the variance of the camera view (as in [1]), how can we ensure that the model infers the correct viewpoint?\", \"some_typos\": [\"(1): 1. Introduction: We use the the Compositional -> We use the Compositional\", \"(2): 2.1 FILM Baseline: the viewpoint and canonical view is the same thing -> the viewpoint and the canonical view are the same thing\", \"(3): Figure 2: The dotted border on the ResNet-101 indicate -> The dotted border on the ResNet-101 indicates\", \"(4): Conclusion: In the case of an autonomous vehicles -> In the case of autonomous vehicles\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The authors propose to learn mental rotations via a synthetic CLEVR-Mental Rotation dataset based on VQA\", \"review\": \"Pros:\\n\\n1. The paper presents an interesting idea to learn mental rotations using a variation of the CLEVR-VQA dataset. The contributions are - the creation of this synthetic CLEVR-Mental Rotation dataset for targeting this problem and a model that encodes questions and viewpoint information to produce answers via FiLM based encoders and 3D volume encoder. \\n2. The results in Table 1 and Table 2 show improvements with respect to the baselines using their final model, but there is still some concern about the improvements in their ablations. \\n3. The paper is well written and easy to understand.\", \"cons\": \"1. The motivation of why we need to learn mental rotations is not very clearly expressed; the practical examples given in the introduction are not sufficient. Does the model really learn these mental rotations from a simple spatial VQA task? This should be evaluated in the experiments either using activation maps or by visualizing intermediate 3D encodings. \\n2. Is the model trained on all views for a single question-view pair, or is one random viewpoint sampled during mini-batch training? Is the rotation of a scene done over the complete 360 degrees? How do you decide how much to rotate to generate a viewpoint?\\n3. The self supervised learning of 3D volumes is an interesting idea, but its use case in this particular problem is very weakly motivated both in experiments and theory. Why is this method better than the method discussed in Section 2.2.1? What is 3D data augmentation and how is it different from 2D data augmentation? \\n4. There is a large variance in some experiments in Table 1. Is it due to the camera transformation embedding? It will be good to discuss the reasons why this is in Table 1 and not in Table 2. \\n5. 
Although the developed models are used in a very different problem setting, the contributions are minor and a large part of the methods seems to be derived from the literature. \\n6. The final results in Table 2 are argued to be better due to their small variance, but more extensive experiments need to be performed to show the benefits of the self-supervised pre-training over the traditional encoder approach.\", \"minor\": \"What is the value of t (tau) used in Eq 3? In Table 2 it shows 1.0, but in the text it\\u2019s discussed as 0.1. Is this a typo, or are they supposed to be different? If yes, why?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
mZLhA0xFGmR | Deep Gated Canonical Correlation Analysis | [
"Ofir Lindenbaum",
"Moshe Salhov",
"Amir Averbuch",
"Yuval Kluger"
] | Canonical Correlation Analysis (CCA) models can extract informative correlated representations from multimodal unlabelled data. Despite their success, CCA models may break if the number of variables exceeds the number of samples. We propose Deep Gated-CCA, a method for learning correlated representations based on a sparse subset of variables from two observed modalities. The proposed procedure learns two non-linear transformations and simultaneously gates the input variables to identify a subset of most correlated variables. The non-linear transformations are learned by training two neural networks to maximize a shared correlation loss defined based on their outputs. Gating is obtained by adding an approximate $\ell_0$ regularization term applied to the input variables. This approximation relies on a recently proposed continuous Gaussian based relaxation for Bernoulli variables which act as gates. We demonstrate the efficacy of the method using several synthetic and real examples. Most notably, the method outperforms other linear and non-linear CCA models. | [
"representations",
"cca models",
"number",
"variables",
"transformations",
"input variables",
"cca",
"models"
] | Reject | https://openreview.net/pdf?id=mZLhA0xFGmR | https://openreview.net/forum?id=mZLhA0xFGmR | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"7OItmzpyghr",
"YEyAeZBsEtu",
"InP7y7fCC8W",
"rWE5HoiDBew",
"KFwh2klSf6J"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040514268,
1604271907126,
1604215253347,
1603998175173,
1603881615561
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3389/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3389/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3389/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3389/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper proposes an approach to sparse CCA with deep neural nets, performing simultaneous feature selection with stochastic gating and canonical correlation maximization. The reviewers think that there is merit in defining an objective function that optimizes the goals jointly throughout the networks. However, the paper has not clearly presented the novelty in methodology. In particular, the reviewers agree that the paper needs to clearly distinguish itself from the two building blocks (Andrew et al. 2013 and Louizos et al. 2017), and demonstrate the significance of combining the two techniques theoretically and/or experimentally. Also, there is a large literature on sparsifying classical methods. Sufficient discussions and comparisons with prior work would better position the current work in the literature.\"}",
"{\"title\": \"A method for deep sparse canonical correlation analysis\", \"review\": \"1. Paper summary:\\n\\nThis paper proposes a DL method for learning sparse non-linear transformations that maximize correlations between two views. In particular, each view is passed through a separate network. Stochastic Gating is applied to the input layer of each network. The two networks are jointly trained by maximising the correlation between their outputs. Sparsity is obtained by imposing L0 regularization terms on the Stochastic Gating variables.\\n\\n2. Strong points of the paper:\\n\\nStochastic Gating gives rise to an objective function that can be optimized through Stochastic Gradient Descent.\\n\\nThe method can detect correlation between two views even when the data size is less than the number of dimensions, as demonstrated by the experimental results.\\n\\n3. Weak points of the paper:\\n\\nThe proposed method is very similar to the DCCA paper of Andrew et al. The only difference is that Andrew et al. use L2 regularization, while the authors use L0 regularization.\\n\\nSimilarly to DCCA, the method suffers from two issues.\\n\\nFirst, the method learns non-linear transformations that however are hard to interpret. Non-linear CCA can be achieved by learning linear transformations through a non-linear correlation measure, such as HSIC. HSIC-CCA [1] can also learn sparse representations. Given the rising importance of explainable AI, non-linear transformations seem to be a drawback.\\n\\nSecond, the method relies on Stochastic Gradient Descent. However, the loss function is not decomposable into batches. This makes batch training somewhat random.\\n\\n4. Conclusion:\\n\\nFor the above reasons, I find the contributions of the paper to be marginal.\\n\\n[1] Billy Chang, Uwe Kr\\u00fcger, Rafal Kustra, Junping Zhang: Canonical Correlation Analysis based on Hilbert-Schmidt Independence Criterion and Centered Kernel Target Alignment. 
ICML (2) 2013: 316-324\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
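The shared correlation objective this review compares against (the Deep CCA loss of Andrew et al., 2013, which the abstract's "shared correlation loss" builds on) can be sketched in a few lines of numpy. This is an illustrative version with a ridge term on the covariance estimates, not the authors' code:

```python
import numpy as np

def total_correlation(Hx, Hy, r=1e-4):
    """Sum of canonical correlations between two network outputs, the
    quantity maximized in Deep CCA.

    Hx, Hy: (d, N) output representations of the two networks
    r: small ridge term keeping covariance estimates invertible,
       which matters precisely when N is small relative to d."""
    N = Hx.shape[1]
    Hx = Hx - Hx.mean(axis=1, keepdims=True)
    Hy = Hy - Hy.mean(axis=1, keepdims=True)
    Sxx = Hx @ Hx.T / (N - 1) + r * np.eye(Hx.shape[0])
    Syy = Hy @ Hy.T / (N - 1) + r * np.eye(Hy.shape[0])
    Sxy = Hx @ Hy.T / (N - 1)

    def inv_sqrt(S):
        # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    # canonical correlations are the singular values of T
    return float(np.linalg.svd(T, compute_uv=False).sum())
```

Because the covariance estimates couple all samples, this objective does not decompose into a sum over examples, which is the batch-decomposability issue the reviewer raises.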
"{\"title\": \"The paper presents a new deep CCA method that applies gating to input variables using a latent clipped Gaussian random variable to avoid overfitting. The total novelty and technical contributions seem limited.\", \"review\": \"This paper presents a new deep CCA method to learn non-linear relationships between two modalities. It trains two neural networks, one for each modality, to maximize the total correlations of their output representations. Gating is applied to input variables by associating each with a latent Bernoulli variable which is then relaxed with the clipped Gaussian random variable. Experiments on one synthetic and two real datasets demonstrate the superiority of the proposed method.\\n\\nBelow are specific comments.\\n\\n1. In the last sentence of Section 2.2, it is unclear to me how the stochastic part of the gates is removed to determine whether $z_x[i_x]$ is equal to or larger than zero. Is $z_x[i_x]$ determined based on the estimated $\\\\mu_x[i_x]$: $z_x[i_x] > 0$ if $\\\\mu_x[i_x] > 0$ and $z_x[i_x] = 0$ otherwise?\\n\\n2. Minor comments (notation inconsistencies/abuse, typos, etc.):\\n\\nThe sentence \\\"For example, in biology ... and engineering (Chen et al., 2017)\\\" is not complete (sentence fragment). Please rephrase it or join it to the preceding sentence.\\nIs \\\"the degeneracy inherit to $N < D_x,D_y$\\\" supposed to be \\\"the degeneracy inherent to $N < D_x,D_y$\\\"?\\nThe word \\\"interpetability\\\" is misspelled.\\nIn the middle subfigure of Figure 1, it is clearer if the label $\\\\epsilon$ and tick values $\\\\{-0.5,0,0.5\\\\}$ are added along the horizontal axis.\\n\\\"straight forward\\\" should be spelled as \\\"straightforward\\\" (no space).\\n\\nEq. 
(4): It seems $\\\\boldsymbol{z}_x^T \\\\boldsymbol{X}, \\\\boldsymbol{z}_y^T \\\\boldsymbol{Y}$ should be written as $\\\\mathop{{\\\\rm diag}}\\\\left(\\\\boldsymbol{z}_x\\\\right) \\\\boldsymbol{X}, \\\\mathop{{\\\\rm diag}}\\\\left(\\\\boldsymbol{z}_y\\\\right) \\\\boldsymbol{Y}$. Note that the $\\\\boldsymbol{X}, \\\\boldsymbol{Y}$ here represent the observed data matrices of dimensions $D_x \\\\times N, D_y \\\\times N$ [rather than random vectors based on which the data are observed].\\n\\nIn Section 2.2, third line, the expression of the regularization:\\n- $\\\\mathbb{P}(\\\\boldsymbol{z}_x[i] \\\\geq 0)$ should be $\\\\mathbb{P}(\\\\boldsymbol{z}_x[i] > 0)$ or $\\\\mathbb{P}(0 < \\\\boldsymbol{z}_x[i] \\\\leq 1)$;\\n- For consistency, it should write $\\\\|\\\\boldsymbol{z}\\\\|_0$ as $\\\\|\\\\boldsymbol{z}_x\\\\|_0$ and the index $i$ as $i_x$.\\n\\n\\\"a similar notations\\\" should be \\\"similar notations\\\".\\n\\nIn Section 2.2, first paragraph, last sentence \\\"The total correlation in Eq. 4 can be expressed using the trace of ...\\\"\\n- \\\"total correlation\\\" should be \\\"total squared correlation\\\".\\n\\nIn Section 3.1, second paragraph, $\\\\hat{\\\\rho}=\\\\bm{\\\\hat{\\\\phi}}\\\\boldsymbol{X}\\\\boldsymbol{Y}^T\\\\bm{\\\\hat{\\\\eta}}^T$ should be $\\\\hat{\\\\rho}=\\\\bm{\\\\hat{\\\\phi}}^T\\\\boldsymbol{X}\\\\boldsymbol{Y}^T\\\\bm{\\\\hat{\\\\eta}}$.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
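The gate construction this reviewer asks about (the Gaussian-based relaxation of Bernoulli gates, following Yamada et al., 2020) can be sketched concretely. This is an illustrative numpy version of the training-time gates, the expected-L0 surrogate, and the reviewer's proposed test-time rule, not the authors' code:

```python
import math
import numpy as np

def sample_gates(mu, sigma=0.5, rng=None):
    """Training-time gates: clipped-Gaussian relaxation of Bernoulli
    variables, z_i = clip(mu_i + sigma * eps_i, 0, 1), eps ~ N(0, 1)."""
    rng = rng if rng is not None else np.random.default_rng()
    eps = rng.normal(size=np.shape(mu))
    return np.clip(np.asarray(mu) + sigma * eps, 0.0, 1.0)

def expected_l0(mu, sigma=0.5):
    """Differentiable surrogate for ||z||_0: sum_i P(z_i > 0), where
    P(z_i > 0) = Phi(mu_i / sigma), with Phi the standard normal CDF
    (written here via the error function)."""
    return sum(0.5 * (1.0 + math.erf(m / (sigma * math.sqrt(2.0)))) for m in mu)

def active_features(mu):
    """Test time: drop the stochastic part and keep feature i iff its
    deterministic gate clip(mu_i, 0, 1) is strictly positive, i.e.
    iff mu_i > 0 (the rule the reviewer conjectures)."""
    return np.clip(mu, 0.0, 1.0) > 0
```

During training, the gated input `sample_gates(mu) * x` is fed to the network while `expected_l0(mu)` is added to the loss, pushing each mu_i toward either a clearly open or clearly closed gate.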
"{\"title\": \"Gated CCA\", \"review\": \"This paper combines an approximate $L_0$ regularization on the canonical vectors with CCA to encourage the CCA to produce sparse vectors. In addition, the CCA is computed on embeddings from a neural network, which makes it possible to capture non-linear correlations.\\n\\nOverall, the paper is well written and easy to follow. The paper seems to be a combination of deep CCA (Andrew et al. 2013) and Louizos et al. 2017. In particular, the $L_0$ regularization approximation is very similar to that proposed in Louizos et al. 2017. It would be great if the authors could be more clear on illustrating the differences (if any). Therefore, the novelty of this paper is unclear. \\n\\nThe experiments could be improved. Since most of the experiments were carried out on relatively small datasets with reasonable sized model, it would be great to have multiple runs that illustrate the stability/variance of the method. In addition, the major benefit of using neural networks as embedding function is the ability to capture non-linear relationships. It would be great to add a synthetic example to illustrate this benefit. The authors mentioned the use of early stopping and hyper-parameter selection, however, it is not clear based on what criteria those actions were carried out. My guess is that it is based on the objective in eq. 4 on the validation set. It would be great if the authors could make this clear, because from the synthetic experiments, $\\\\lambda$ plays a quite important role for the final performance.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Sparse Deep CCA variant building on existing components with good but somewhat partial empirical evaluation\", \"review\": \"Summary: The authors propose a new non-linear CCA variant that learns mappings that are sparse with respect to the input variables, using approximate $l_0$ regularisation, to improve performance for applications with large number of features but few samples.\", \"reasons_for_score\": \"I am leaning towards rejection due to the straightforward nature of the work. The method combines two existing techniques in fairly obvious way and despite good empirical comparisons has also issues in evaluation since more recent comparison methods are missing.\", \"detailed_feedback\": \"The related work and importance of the application are well covered, and the technical solution is sound. The conceptual novelty is, however, fairly limited; several sparse CCA variants have been proposed in the past and switching to proper sparsity ($l_0$ vs more common $l_1$) is a natural thing to do. Furthermore, in recent years technical solutions building on stronger sparsity priors have been proposed for closely related models (e.g. Boyveyron et al. \\\"Bayesian variable selection of globally sparse probabilistic PCA\\\" (2018) discusses this in detail for PCA, and many algorithmic details for PCA generalise easily for CCA by interpreting CCA as group-sparse PCA).\\n\\nThe specific technical solution presented here appears to be new, but builds directly on existing and relatively obvious choices: The loss matches Andrew et al. (2013) and the $l_0$ approximation is from Yamada et al. (2020). Even though the specific formulation in the latter is recent, the underlying auxiliary variable construct has been used for similar purposes before. The simplicity of the technical approach is highlighted by the fact that the whole model description takes only slightly more than one page of the paper. 
In summary, the paper does not make fundamental conceptual or technical contributions. It certainly has potential for being a useful practical tool for the task, but the required scientific insight is limited.\\n\\nThe empirical demonstrations are nice and illustrative, but carried out on somewhat simplified benchmark data. They do show that the method works well in comparison against reasonably chosen competing methods, but do not clearly indicate qualitative change in CCA applications. The advantage over ordinary DCCA, published already 7 years ago, is not particularly striking in Table 2, and in recent years quite a few deep CCA variants have been proposed but are not compared against (or cited). Consequently, we cannot really evaluate whether this advances the field in practice; there is potential, but as it is the empirical comparisons do not seem sufficient to overcome the lack of technical and conceptual contribution.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
5NA1PinlGFu | Colorization Transformer | [
"Manoj Kumar",
"Dirk Weissenborn",
"Nal Kalchbrenner"
] | We present the Colorization Transformer, a novel approach for diverse high fidelity image colorization based on self-attention. Given a grayscale image, the colorization proceeds in three steps. We first use a conditional autoregressive transformer to produce a low resolution coarse coloring of the grayscale image. Our architecture adopts conditional transformer layers to effectively condition grayscale input. Two subsequent fully parallel networks upsample the coarse colored low resolution image into a finely colored high resolution image. Sampling from the Colorization Transformer produces diverse colorings whose fidelity outperforms the previous state-of-the-art on colorising ImageNet based on FID results and based on a human evaluation in a Mechanical Turk test. Remarkably, in more than 60\% of cases human evaluators prefer the highest rated among three generated colorings over the ground truth. The code and pre-trained checkpoints for Colorization Transformer are publicly available at https://github.com/google-research/google-research/tree/master/coltran | [
"colorization transformer",
"grayscale image",
"colorization transformer colorization",
"novel",
"colorization proceeds",
"steps",
"conditional autoregressive transformer",
"architecture"
] | Accept (Poster) | https://openreview.net/pdf?id=5NA1PinlGFu | https://openreview.net/forum?id=5NA1PinlGFu | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Z1D2IIuo5ab",
"CXBmHZzufTC",
"o3RTc03AFF_",
"pa0_QH7KcHr",
"5MoaqioViEY",
"TPTSk_hj0NC",
"eb8g8LJaz96",
"WDPGI8V0y5",
"82NU_eMgcsV",
"ZCar2M-smP",
"lSNRY4-0hgd",
"IUQ6RLQFgRR",
"v6Tpi142mBb",
"Ch646_YhDE1"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040463435,
1606197977816,
1605717431535,
1605717019686,
1605714846514,
1605712229267,
1605710653815,
1605707756523,
1605707216942,
1605706708622,
1603943684088,
1603943667668,
1603915973747,
1603340156951
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3388/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3388/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3388/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3388/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3388/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3388/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3388/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3388/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3388/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3388/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3388/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3388/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3388/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper initially received a mixed rating, with two reviewers rating the paper below the bar and two above the bar. The raised concerns include the need for an autoregressive model for upsampling and the effect of batch sizes. These concerns were well-addressed in the rebuttal. Both of the reviewers that originally rated the paper below the bar raised their scores. After consulting the paper, the reviews, and the rebuttal, the AC agrees that the paper has its merits and is happy to accept the paper.\"}",
"{\"title\": \"End of discussion phase\", \"comment\": \"Hi all,\\nSince the end of the discussion phase is fast approaching, we would like to know if our rebuttal helped to clarify some concerns. If there are other concerns that we could help to clarify, please do let us know.\\nThanks.\"}",
"{\"title\": \"Rebuttal\", \"comment\": [\"Thanks everyone for your time and reviews. Here is a summary of the changes\", \"**Writing**\", \"Restructured the end of the introduction. It now highlights the motivation of the network and contributions of the paper.\", \"Moved **Row and Column Self Attention** and **Axial Transformer** to **Section 3 Background: Axial Transformer**\", \"Expanded the subsection \\u201cAxial Transformer\\u201d with paragraphs **Outer Decoder, Inner Decoder and Encoder** and equations.\", \"Merged the remainder of the \\u201cModel\\u201d section and \\u201cArchitecture\\u201d into a **Section 4 Proposed Architecture**\", \"Expanded the **Ablation Studies (Section 5.2)** to add some insights from each experiment\", \"Added citations to all models in **Table 2**\", \"Added a small background on autoregressive models to **Appendix A**\", \"Added #parameters and inference speed comparisons to the **Appendix H**\", \"Edit: Nov 20, provided a bit more detail on the semi-parallel sampling mechanism.\", \"**Experiments**\", \"Added out-of-domain colorizations on Celeb-A and LSUN **Appendix D**\", \"Added experiments on the low-batch size regime **Appendix E**\", \"Added final FID results on the baseline Axial Transformer to **Table 2 (ColTran-B)**\", \"Added shift modulation experiments to the graph in **Figure 3**.\", \"Added global conditioning on cAtt and cMLP experiment **Appendix G**\", \"Given the strong empirical performance and experimentation (mentioned by multiple reviewers) and our now improved writing,\", \"We hope that all our contributions as a whole would be of interest to the ICLR audience.\", \"Please reconsider your scores after the rebuttal revision.\"]}",
"{\"title\": \"Review Response: AnonReviewer 1\", \"comment\": \"Thanks for your reviews. Please find attached our response.\\n\\n**Additional ablations**\\n* **Scale-only modulation** - We added this curve to **Figure 3**. Scale-only modulations perform much better than shift-only modulations. Our intuition is that scaling allows increasing or decreasing per-pixel activations more easily as compared to biasing. We speculate that this could be useful (e.g. to turn off the contributions of individual pixels based on the context when we compute dot products for self-attention)\\n* **Global context for cMLP / cATT** - We added this curve to **Appendix G**. Our model performs much worse.\\nOur intuition follows from 1). A global context for cAtt means that all key, query and value pairs are scaled/biased by constant values $c_k, c_q$ and $c_v$. The dot-product between k and q would be either $c_k * c_q * k \\\\cdot q$ (for scaling) or $k \\\\cdot q + c_k \\\\sum{q} + c_q \\\\sum{k} + c_k * c_q$ (for biasing). We speculate that this can have a net effect of increasing the magnitude of dot-product attention and may lead to difficulties in optimization. Elementwise operations (as done in cAtt) are more flexible. For cMLP this effect is purely empirical.\\n* **cLN with mean pooling performing worse than no cLN** We expanded a bit on this in **Section 5.1**.\\nA fixed mean pooling layer forces all the cLN layers to use the same global representation with the same per-pixel weight. The ablation indicates that it is likely that different global representations are meaningful for different cLN layers. 
Allowing the per-pixel weights to be learnable offers some degrees of freedom and hence different LayerNorm layers can make use of different aggregated global representations.\\n\\n**Number of parameters / training / inference speed**\\nWe added a small section in **Appendix H** comparing PixColor to ColTran.\\n* **Training parameters**: ColTran has a total of ColTran core (46M) + Color Upsampler (14M) + Spatial Upsampler (14M) = 74M parameters. This is fewer than PixColor, which has Conditioning network (44M) + Colorizer network (11M) + Refinement Network (28M) = 83M parameters.\\n* **Inference speed**: ColTran core can sample 64x64 grayscale images in 4-5 minutes on a P100 GPU vs PixColor, which takes ~10 minutes to colorize 28x28 grayscale images on a K40 GPU. Sampling 28x28 colorizations takes just around 30 seconds. The upsampler networks take on the order of milliseconds.\\n* **Training time**: Our training time is comparable to PixColor (~3 days). However, we are able to reach FID scores of 20.38 within a single day as compared to PixColor\\u2019s final FID of 24.32.\\n\\n**References**\\nWe added references to all baselines in Table 2. They are described in the related work section.\\n\\n**Technical novelty**\\nWe added a section to the introduction better highlighting the contributions of this paper.\\n* First application of transformers for high-resolution ($256 \\\\times 256$) image colorization.\\n* We introduce conditional transformer layers for low-resolution coarse colorization in Section 4.1. The conditional layers incorporate conditioning information via multiple learnable components that are applied per-pixel and per-channel. We validate the contribution of each component with extensive experimentation and ablation studies.\\n* We propose training an auxiliary parallel prediction model jointly with the low resolution coarse colorization model in Section 4.2. Improved FID scores demonstrate the usefulness of this auxiliary model.\"}",
"{\"title\": \"Review Response: AnonReviewer 4 (Part 3)\", \"comment\": \"**Q: It seems like a lot of compute (16 TPUv2) was used and the batch size was relatively large. Is the large batch size necessary for obtaining these results, or could a smaller amount of compute and smaller batch size be used?**\\n\\n16 TPUv2 chips are the second lowest configuration available to us. As requested, we additionally trained ColTran core and the upsamplers on 4 TPUv2 chips (the lowest configuration) with a reduced-batch size of 56 and 192 each. For the spatial upsampler, we found that a batch-size of 8 was sub-optimal and led to a large deterioration in loss. We thus used a smaller spatial upsampler with 2 axial attention blocks with a batch-size of 16 and trained it also on 4 TPUv2 chips. Our FID drops from 19.71 to 20.9 which is still significantly better than the other models.\\nWe note that in this experiment, we use just 12 TPUv2 chips in total while PixColor uses a total of 16 GPUs.\\nWe added the above analysis to **Appendix E**\\n\\n**Q: Why does training baselines with 2x and 4x wider MLP dimensions make \\u201ca fair comparison\\u201d? Is \\u201cBaseline\\u201d in Figure 3, x1 (standard) MLP but no conditioning? Why would x1 be better than x4, but worse than x2?**\\n\\nWe edited the corresponding subsection in **Section 5.2**. Our baselines MLP2x and MLP 4x (now renamed to ColTran-B 2x and ColTran-B 4x) are original Axial Transformer networks that condition via just skip-connections. Both *ColTran-B 2x* and *ColTran-B 4x* have an increased parameter count via $1 \\\\times 1$ dense layers which are the same operations due to which ColTran has an increased parameter count. So it makes for a fair comparison. Our results show that the increased performance cannot be explained solely by the fact that our model has more parameters.\", \"re\": \"why x1 performs better than x4, this is purely empirical. 
Our intuition is that sometimes wider networks can lead to worse performance due to the difficulty in optimization. We ran a small hyperparameter sweep over the learning rates for x1, x2 and x4 and report the best performance.\\n\\n**Q: The caption of Figure 2 feels a bit imbalanced.**\\nWe expanded the caption of Figure 2 to give a short description about both ColTran core and the upsamplers. The figure now depicts the \\\"outer decoder\\\", \\\"inner decoder\\\" and \\\"encoder\\\" which were contributions of [Ho et al. 2019]. We now clarify our contributions in the caption itself.\\n\\n**Q: Auxiliary parallel head**\\nThis is a contribution of our paper. As noted, we investigate the impact of this auxiliary parallel head in Section 5.3. We added a few words in Section 4.2 that we will study the effect of this later in Section 5.3.\\n\\n**Q: Adapt the Axial Transformer model for colorization.**\\nWe removed this. All our architectural modifications or adaptations (i.e. the conditional transformer layers and auxiliary parallel head) are now described in Section 4.3 and background information is described in Section 3.\\n\\n**Q: Number of axial attention blocks**\\n\\nWe believe that this is a hyperparameter of the network similar to the learning rate, optimizer choice and hidden size. We did a very small sweep using the baseline axial transformer (no conditional layers) with the following configurations to come up with this number.\\n* hidden size = 512, number of blocks = 4\\n* hidden size = 1024, number of blocks = 2\\n* hidden size = 512, number of blocks = 2\\n\\nOnce we found the optimal configuration, we fixed this for all future architecture design.\\nWe added the above to **Appendix F**\\n\\n**Q: In model, ColTran core\\u2026.**\\n\\nIn our latest version of the paper, the \\u201cColTran core\\u201d paragraph and its corresponding equations have been merged into **Section 4.1 ColTran core**. 
We introduce the terminology axial transformer and axial attention in **Section 3** before describing these components.\\n\\n\\n**Q: The second paragraph under ColTran Upsamplers (In our experiments\\u2026) is slightly confusing.**\\n\\nWe have moved this explanation to the end of the current Section 4.3.\\nWe upsample all pixels in parallel (parallel upsampling) to predict a distribution over each pixel in the high resolution image (Eq 6). Instead of sampling from this predicted distribution, we use the argmax. There might be a slight confusion between \\\"upsampling\\\" and \\\"sampling\\\" which we clarify.\\n\\n**Q: Out of domain images might be interesting**\\n\\nColorizations of out-of-domain datasets **LSUN-bedrooms** and **Celeb-A** have been added to **Appendix D**. We neither cherry-picked these images nor finetuned/retrained our model on these datasets. Barring a couple of outliers, our colorizations are realistic. We will colorize \\u201cin-the-wild\\u201d grayscale images for the final version.\\n\\n**Q: Probably be best to additionally include the citation here**\\nWe added the citation of PixColor to this section.\\n\\nWe believe we have clarified all your comments and improved the writing. We are looking forward to reading your updated impression of the paper.\"}",
"{\"title\": \"Review Response: AnonReviewer 4 (Part 2)\", \"comment\": \"**Ablation studies**\\n\\nWe rewrote the Ablation Studies subsection of the paper. The \\u201cconditioning details\\u201d section is expanded with bullet points providing a high-level motivation for each experiment. We add it here for convenience.\\n\\n* We added final FID numbers for the baseline Axial Transformer that conditions only via skip-connections without conditioning layers (**ColTran - B**) in Table 2. The baseline achieves an FID score of 21.6 (significantly better than the baselines but much worse than ours).\\n* **Importance of each conditional component:** We perform a leave-one-out study to determine the importance of each conditional component. We remove each conditional component one at a time and retrain the new ablated model. The curves *no cLN*, *no cMLP* and *no cAtt* in the middle of Figure 3 quantify our results. While each conditional component improves final performance, cAtt plays the most important role.\\n* **Multiplicative vs Additive Interactions:** Conditional transformer layers employ both conditional shifts and scales, consisting of additive and multiplicative interactions, respectively. The curves *Scale* and *Shift* on the right hand side of Figure 3 demonstrate the impact of these interactions via ablated architectures that use conditional shifts and conditional scales only. While both types of interactions are important, multiplicative interactions have a much stronger impact.\\n* **Context-aware dot product attention:** Self-attention computes similarity between pixel representations using a dot product between $k$ and $q$; cAtt applies conditional shifts and scales on $q$, $k$ and allows modifying this similarity based on contextual information. 
The curve *cAtt, only v* depicts that removing this property by conditioning only on $v$ leads to worse results.\\n* **Fixed vs adaptive global representation** cLN aggregates global information with a flexible learnable spatial pooling layer. We experimented with a fixed mean pooling layer forcing all the cLN layers to use the same global representation with the same per-pixel weight. The curve *cLN, mean pool* on the right of Figure 3 shows that enforcing this constraint causes inferior performance as compared to even having no cLN. This indicates that different aggregations of global representations are important for different cLN layers.\\n* We expanded the caption in Figure 3 and added a short description of what each label means. This is meant to be a visual aid to the more detailed explanations in Section 5.2.\\n* We moved the curve computing Gated operations for conditioning and the additional ablation suggested by AnonRev1 to **Appendix G**.\\n\\n**Other questions**\\n\\n**Q: Overall it seems like every generated image has a red, green, and blue variant. Were they sampled in a particular manner to guarantee this? Obviously it is possible to draw other samples, but do they all largely fall into one of these three coarse categories?**\\n\\nAll our generated images are displayed with pixel-by-pixel sampling. They were not sampled in any other manner. We analyzed what the most dominant coarse color is per-image across 5000 images. The dominant hues ordered by counts are black, white, brown, blue and green. Here is the color band of the top 50 colors (https://ibb.co/Jx9htXq)\\n\\n**Q: When the performance is poor for a given sample, it usually because entire swaths of the image are being painted in with a very non-natural color (like someone\\u2019s face being green, or the entire picture having a blue-ish exposure). Can you speak to this and other common \\u201cmistakes\\u201d that are observed?**\\n\\nThis is true. 
Every now and then, we can sample a coarse color for a pixel that has low probability and which the model has not seen before. This can then have a cascading effect leading to such mistakes. Some other mistakes which achieved a 0% fool rate are in Appendix I:\\n* Color bleeding when edges are not detected correctly.\\n* Inability to color highly complex scenes, such as a large number of small objects and complex textures, e.g. the dress of a soldier.\\n* Once in a while, we also observe that the model returns the grayscale image as a sample. But this is pretty rare.\\n\\n**Q: How do these compare with some of the other methods you compared yours against? Are there simply fewer \\u201cmistakes\\u201d (i.e. non-natural images), or are the types of imperfections created by this approach different that would warrant different use-cases?**\\nArtifacts such as color-bleeding and unnatural colors are common among the probabilistic colorization models that we compare against on inspection of the samples. You are right that on average our model generates more natural colorizations avoiding such artifacts given the human evaluation results.\\n\\nAutoregressive colorization also has a human-in-the-loop use case. For every pixel, the model can display the x most probable colors that the user can choose from and the colorization can be guided by the user.\"}",
"{\"title\": \"Review Response: AnonReviewer 4 (Part 1)\", \"comment\": [\"We thank you for investing a significant amount of time in providing detailed reviews to help us improve the quality of the paper. We have significantly restructured the writing to incorporate your suggestions. We first address the most pressing concerns followed by the minor comments. All our responses are reflected in the latest version of the draft.\", \"**Motivation for using axial transformer**\", \"A summarized version of the points below is added to the end of the **Introduction**.\", \"Axial Transformer achieves state-of-the-art on unconditional image generation (at the time of submission) measured using bits-per-pixel on ImageNet32 and ImageNet64 without the usage of custom kernels. It is thus very appealing to use as an autoregressive backbone for colorization.\", \"ColTran shares the highly useful advantages of the Axial Transformer, which are the ability to capture a global receptive field with two layers and an efficient implementation using matrix multiplication on modern accelerators such as TPUs.\", \"The semi-parallel sampling in Axial Transformer enables us to sample colorizations much faster than prior autoregressive colorization models. As a result, ColTran core can sample 64x64 grayscale images in around 5 minutes on a P100 GPU vs PixColor, which takes ~10 minutes to colorize 28x28 grayscale images on a K40 GPU. Sampling 28x28 colorizations takes just around 30 seconds.\", \"AxialDeepLab, a model that applies axial self-attention to semantic segmentation (which at its core is the same technique as Axial Transformer modulo masking operations) was recently accepted to ECCV 2020 (https://arxiv.org/abs/2003.07853). We added a citation to this paper in the introduction.\", \"The OpenReview discussion of the Axial Transformer ICLR submission indicates the method was rejected primarily due to lack of clarity on the contribution of the paper. 
Two reviewers point out that the claim of the paper was a general-purpose technique to improve self-attention in multidimensional transformers while the scope of the paper is indeed limited to autoregressive image modelling.\", \"The code for the Axial Transformer is fully open sourced, which can help in removing any ambiguity about the implementation details. We will open source our code as well.\", \"**More explanation on Axial Transformer**\", \"We expanded the subsection that explains Axial Transformer in Section 3.2.\", \"The section now contains 3 paragraphs describing the outer decoder, inner decoder and encoder with their corresponding equations.\", \"We added a couple of sentences describing the semi-parallel sampling scheme of the Axial Transformer.\", \"**Improving the flow of the paper**\", \"As requested, we restructured the methods / architecture section in the latest version of the draft and made the following changes. We hope the new narrative is clearer.\", \"We introduced Section 3 \\u201cBackground: Axial Transformer\\u201d and moved the subsections \\u201cRow and Column Self-Attention\\u201d and \\u201cAxial Transformer\\u201d into this. This section is meant for the paper to be self-contained. All terminology revolving around the Axial Transformer and axial self-attention is introduced in this section.\", \"We combined Section 3 \\u201cModel\\u201d and Section 4 \\u201cArchitecture\\u201d to form a \\u201cProposed Architecture\\u201d section to make it more intertwined. 
These are the changes.\", \"**Introduction of Section 4**: Conditional distributions modeled by the three networks.\", \"**Section 4.1**: ColTran core, the equation and the architectural modifications.\", \"**Section 4.2**: Auxiliary parallel head and its equation.\", \"**Section 4.3**: The color and spatial upsamplers and their equations.\", \"In short, Section 3 now contains the background material and Section 4 contains the modifications for high-resolution colorization, which are contributions of this work.\", \"**Results in Table 2**\", \"We added citations to all the baseline models in Table 2 and what underlying generative model they rely on in the Related Work section.\", \"We obtained the results on FID from cINN [Ardizzone et al., 2019] and the human evaluation results from PixColor. To compute the FID results of PixColor, we used 5000 samples which were provided by the original authors.\", \"All of these techniques perform high resolution colorization from a grayscale image, so the numbers are directly comparable. CNN is a deterministic baseline used in [Ardizzone et al., 2019]. We removed it as CIC, LRAC and LTBC are also based on deterministic convolutional neural networks.\"]}",
"{\"title\": \"Review Response: AnonReviewer 2\", \"comment\": \"We thank you for your reviews. Please find attached our response.\\n\\n**Effect of the auxiliary parallel model**\\nWe added a note that this helps to capture global structure in Section 4.2. We perform a detailed empirical analysis on the effect of this model in **Section 5.3**.\\n\\n**Upsamplers**\\nWe would like to clarify that our upsampling is done by parallel self-attention based models, not autoregressive ones. However, it is true that upsampling / refinement could potentially be done by convolutional architectures. That said:\\n* The spatial refinement network in PixColor uses 28M parameters whereas our spatial upsampler uses just 13M parameters. Deeper convolutional networks would be required to perform upsampling.\\n* From a practical perspective, it makes our architecture a bit more complicated. Currently our architecture is conceptually simple and employs only axial attention blocks with optional conditioning + masking. In future, we could explore combining convolutions + attention in different parts of the network to improve colorization performance.\\n\\n**Other**\\n* Yes, that is true, we used the same learning rate for all models. The spatial upsampler receives gradients (albeit correlated) from 256x256 pixels, as compared to the colorizer and the color upsampler, which might explain why a smaller batch-size for the spatial upsampler is sufficient. We did not tune hyperparameters extensively. Once we found an architecture that colorizes low resolution images coarsely, we used the same training and architecture setup for the color and spatial upsampler. It may be possible to improve our results with an extensive hyperparameter sweep. \\n* We train our colorizer for 450K steps, the color upsampler for 300K steps and the spatial upsampler for 150K steps. In general, we found longer training to improve performance for the colorizer and not so much for the upsamplers. 
We added this to the training subsection in **Section 5.1**.\\n* Applying a cosine learning rate schedule requires 2 hyperparameters to tune whereas we apply Polyak averaging with a value of 0.999. In future, we can experiment with different learning rate schedules.\\n* ColTran core can sample 64x64 grayscale images in 4-5 minutes on a P100 GPU vs PixColor, which takes ~10 minutes to colorize 28x28 grayscale images on a K40 GPU. Sampling 28x28 colorizations takes just around 30 seconds. The upsampler networks take on the order of milliseconds. We added this analysis to **Appendix H**.\"}",
"{\"title\": \"Review Response: AnonReviewer 3 [2 / 2]\", \"comment\": [\"**Evaluation**\", \"We added the most recent baseline in the colorization literature, cINN [Ardizzone et al., 2019], which uses a combination of a VGG network + Glow [Kingma et al., 2018], in Table 2. The model performs slightly worse than PixColor.\", \"We extend the original Axial Transformer [Ho et al., 2019] for image colorization, which is a contribution of our work. This is a strong baseline in itself (ColTran-B in Table 2 and Figure 3).\", \"We improve upon this baseline significantly by our architectural modifications. Comparisons to ColTran-B are provided in Table 2 and Figure 3.\", \"We did an extensive literature survey on existing generative colorization techniques in Section 2, as noted by Reviewer 1. PixColor, despite coming out in 2017, is still a state-of-the-art colorization model.\", \"Image colorization is an underexplored yet important research area (noted by Reviewers 1 and 4). Hence it is not entirely surprising that there is a lot of scope to improve existing state-of-the-art colorization techniques as compared to unconditional image generation.\", \"Our colorizations are almost imperceptible from the ground truth barring a few outliers, as reflected in the Mechanical Turk results.\", \"All in all, we believe advancing the state-of-the-art significantly (~20% relative improvement) over prior techniques, which all have FID scores between 24 and 26, and setting a strong baseline for future research in colorization should be considered a positive of the paper and not a negative.\", \"We believe that our latest version of the draft plus our response clarifies some of the concerns raised.\"]}",
"{\"title\": \"Review Response: AnonReviewer 3 [1/2]\", \"comment\": [\"We thank you for your reviews. Please find attached our response. We believe the latest version of the draft improves clarity, highlights the contributions of our work and motivates the architecture better.\", \"**Technical contributions**\", \"We have added a summarized version of the following points to the end of the **Introduction (Section 1)** highlighting the technical contributions of our paper.\", \"First application of transformers for high-resolution ($256 \\\\times 256$) image colorization. Axial transformers which we base our technique upon were initially applied to model only low resolution $64 \\\\times 64$ images. Other related techniques that rely exclusively on self-attention to model images such as the Sparse Transformer [Child et. al 2019] and Image Transformer [Parmar et. al 2019] limit to resolutions of 64x64 and below. Scaling transformers to the task of coloring $256 \\\\times 256$ grayscale images or equivalently modeling $\\\\sim$ 200K symbols is a challenging task and our paper accomplishes this task quite successfully.\", \"While Axial Transformers support conditioning by biasing the input, we find that **directly conditioning the transformer layers** can improve results significantly. We introduce conditional transformer layers for low-resolution coarse colorization in **Section 4.1**. The conditional layers incorporate conditioning information via multiple learnable components that are applied per-pixel and per-channel. We validate the contribution of each component with extensive experimentation and ablation studies that were appreciated by multiple reviewers. Our experiments can provide insight on how to effectively condition spatial information in a transformer for related tasks such as image editing and restoration.\", \"We propose training an **auxiliary parallel prediction model** jointly with the low resolution coarse colorization model in **Section 4.2**. 
Improved FID scores demonstrate the usefulness of this auxiliary model.\", \"**Motivation**\", \"We agree that the motivation was not clearly described in the first draft of our paper. Here we describe the motivation behind different components of our architecture. We added a summarized version discussing this towards the end of the **third paragraph in Section 1**\", \"**Motivation for axial self-attention blocks** - The main advantages of axial self-attention blocks are the ability to capture a global receptive field with only two layers and $\\\\mathcal{O}(D \\\\sqrt{D})$ instead of $\\\\mathcal{O}(D^2)$ complexity. They can be implemented efficiently using matrix-multiplications on modern accelerators such as TPUs.\", \"**Motivation for using three sub-networks**: Generating high-resolution images using only self-attention is computationally challenging and hence prior work on unconditional image generation using self-attention limits to generating small images $64 \\\\times 64$. To alleviate the inherent complexity in colorizing high-resolution grayscale images, we decompose the task into three simpler sequential subtasks: coarse low resolution colorization, color super-resolution and spatial super-resolution and use a separate network for each. This enables us to train larger models for colorization.\", \"**Motivation for the choice of Axial Transformer** - Axial Transformer is state-of-the-art in unconditional image generation benchmarks (ImageNet 32, ImageNet 64) at the time of submission without the usage of custom GPU kernels. The Axial Transformer has a semi-parallel sampling mechanism which enables us to colorize 64x64 grayscale images in 4-5 minutes P100 GPU vs PixColor that takes ~10 minutes to colorize 28x28 grayscale images on a K40 GPU. 
Sampling 28x28 colorizations takes just around 30 seconds.\", \"**Motivation for conditional transformer layers** - Conditioning every layer via multiple components allows stronger gradient signals through the encoder, and as a result the encoder can learn better contextual representations. This improves our results compared to a baseline Axial Transformer both in Table 2 and Figure 3.\", \"**Clarity**\", \"In the latest version of the draft, we have restructured the writing to improve readability.\", \"We added citations to Table 2. We provide a short description of what generative model each baseline is based upon in the Related Work section.\", \"We expanded the caption in Figure 2. We expanded the \\\"Axial Transformer\\\" subsection and now explain every component of the figure (Outer Decoder, Inner Decoder, Encoder) in detail with equations. Our modifications made to the architecture are now described in Section 4.\", \"The ground-truth coarse low resolution image is both the input to the decoder and the target during training. Masked layers ensure that the conditional distribution over each pixel depends solely on information from previous ground-truth pixels.\"]}",
"{\"title\": \"Lack of clarity and novelty, weak evaluation\", \"review\": \"Thank the authors for addressing reviewers' comments extensively. After rebuttal, I agree with the significance of the proposed method in terms of performance improvement in this particular task. However, the technical novelty is still limited. Thus, I increased my rating to 5.\\n\\nIn this paper, the authors propose an autoregressive image colorization method based on self-attention. The proposed method first infers an initial low-resolution colorization in an autoregressive manner, then upsamples both spatial resolution and color depth. The authors adopt self-attention to encode contextual information of the scene. Experimental results show that each component of the proposed method is effective and the proposed method outperforms an existing autoregressive method.\\n\\nOverall, it is difficult to understand the contribution of this paper. I think it is because the writing in Sec. 1 and 2 is unclear. Particularly, the writing of introduction needs a significant improvement as the authors reveal too much details of this paper instead of describing the high-level motivation of the proposed method and the technical contribution. The clarity also needs to be improved in the method and experiment sections. (e.g. ColTran Core in Fig. 2 is confusing. It looks complicated, but the writing is too short. what is the ground-truth in the objective? what about Table 2? each baseline is not explained and cited.)\\n\\nTechnical novelty is incremental. I could not understand the motivation of the proposed network due to the clarity issue, but this paper generally adopts existing methods such as an autoregressive model and self-attention blocks to apply them to an image colorization problem, which limits the novelty of this paper.\\n\\nEvaluation is weak. PixColor is an old model (in 2017), so recent methods and state-of-the-art methods should be compared. 
I could not find out what the baseline methods in Table 2 are, but they do not look like state-of-the-art models. Performance gain over the previous autoregressive model using widely-used self-attention blocks is not enough for accepting this paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reasonable approach / Well-written / Better than baselines\", \"review\": \"Update: I really appreciate the authors' efforts to address my original concerns. I believe that this work is a nice application of transformers to image colorization. The paper is well-written and the performance of proposed transformer architecture is strong. I think that this work is above the threshold of acceptance.\\n\\n**Strengths**\\nThe motivation of the proposed architecture is reasonable. The paper is generally well-written. \\n\\n**Major comments**\\nIt\\u2019s better to include some discussion on regularization effects from Eq. (4). Eq. (4) seems to be helpful to capture the overall structure in an image, rather than capturing only local correlation from autoregressive formulation.\\n\\nFor upsampling, do we really need to make use of an autoregressive model? A stack of transposed convolutions might be working well, because we only need to upsample input/color resolutions. I totally agree that autoregressive formulation does help achieve better results, but it may be possible to achieve similar performance by just using transposed convolutions. \\n\\nIt\\u2019s better to include more details for reproducing results. \\n(1) Even if different batch sizes used (224, 768 and 32), learning rates for all experiments are fixed 3e-4?\\n(2) How many epochs or steps are required for convergence?\\n(3) Figure 6 shows that EMA is extremely important. How about using cosine annealing for a learning rate scheduler? It may help achieve more robust FID scores without EMA.\\n(4) Compared to baselines, this approach is extremely slow due to the autoregressive sampling. It\\u2019s better to report inference time.\\n\\nI'm not sure that conditional layer normalization is indeed helpful. 
\\n\\n**Minor comments**\\nThe x-axis title of figure 4c (\\u201ctraining steps\\u201d) seems to be wrong.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Image colorization based on self-attention\", \"review\": \"--- Update ---\\nThe authors have addressed several concerns that I had regarding the work. While this is largely an application of a previous method, they have made some application-specific decisions in order to achieve the significant boost in performance on colorization that they saw. While the metrics for this task are much improved, there are still some things to be desired on the qualitative results (i.e. the diversity of results tends to be in blocks, as opposed to high within-image color variability). Nevertheless, I think the improvement from this approach may guide future work in this area. Given the authors' responses and changes made, I have amended my recommendation accordingly. \\n\\n__1. Summary__\\nThe authors propose a method for image colorization based on self-attention largely following the architecture of the Axial Transformer (Ho et al., 2019b). This approach outperforms several SOTA colorization models on FID and human evaluation. \\n\\n\\n__2a. Strong Points__\\nThe motivation for this work is clear. Image colorization has many applications and while past approaches have advanced significantly in the past few years, there is certainly much left to be explored in this space.\\n\\nThe recap/explanation of the Axial Transformer is clear and concise. My concern (see below) is not with the articulation of this section, but more on the reliance on an approach that hasn\\u2019t been accepted via peer review. \\n\\nThe performance of this method using both FID as well as human evaluators is compelling. \\n\\nBreaking the problem of colorization into two intermediate low resolution images is a nice approach for enabling larger models. One question would be how well a single model would perform if smaller images were all that was required. \\n\\nThe ablation studies show how different components impact the performance. \\n\\n\\n__2b. 
Weak Points__\\nAll three modules of this approach are based on the method of (Ho et al., 2019b), which is available on arxiv, but was rejected from ICLR 2020. The current work is focused on the application of that method. This makes for a bit of a tricky situation. The description of the Axial Transformer is given in section 4, but it is only textual and refers the readers back to the pre-print for more detail. Since this is the central method of the current work, at a minimum I think it requires more explanation/justification as opposed to pointing to a work that has not been accepted via peer review. \\n\\nWhile the language of the paper is fine, the overall flow of the paper is lacking a bit of narrative. Overall I found myself having to jump around to find the definition and explanation of important things. Particularly within the description of the model, it would be good to add some language to help the sections flow; currently they feel very independent. Alternatively, it may help if, in the beginning part of the model description, the different model components (fc, fs, etc.) are named there. Related, the Architecture Section feels out of place after the Model description. There are references to the attention layers in the model description which are not explored until the Architecture section. Perhaps it makes sense to put the Architecture section first because it\\u2019s addressing layers/mechanisms that span all aspects of the model. Or perhaps combining the two sections? Right now it feels like there are two methods sections.\\n\\nSome of the text around Eqn(7) seems to be missing because the sentence structure doesn't make sense.\\n\\nIt\\u2019s not clear what some of the labels in Figure 3 mean. You have to go into the text to find out what MLP 4x means, for example, and then when you find it in section 5.2, you have to go back to section 4.3 to actually understand what it means. \\n\\nThe ablation studies feel like they\\u2019re done in relative isolation. 
It would be useful to know, for example, how the lower performance of using the standard Axial Transformer vs. the conditional Axial transformer impacts the final results, not just that portion. The section \\u201cConditioning Details\\u201d in 5.2 just feels like a results dump. It\\u2019s unclear what motivates those particular ablation choices and what those results tell the reader more generally about this approach. Some kind of context or discussion would be useful. In general, this section feels like it\\u2019s being included just to show that ablation studies were performed without providing any greater understanding as to the approach (to potentially motivate future work or other examples, for instance). The descriptions are also very terse. If these experiments add meaningful insight to this approach, then they belong in the main text with additional explanation and discussion. If they are merely a justification that this approach works, then I would suggest moving most of this section to the appendix and using the space to give better explanation of the methods and results which are central to the application. \\n\\nSome of the models which the current method is compared to (Table 2) are not referenced to the best of my knowledge. What does \\\"CNN\\\" mean in this case? Do all of these methods use a combined spatial and color upsampling method? If not, how were they implemented? This is actually a pretty significant issue as it limits the reproducibility of the comparative experiments. \\n\\n\\n__3. Recommendation__\\nReject. While the results are compelling, the work largely relies on a method which has not been accepted via peer review. 
That in and of itself does not warrant rejection, but I believe it contributed to some of the difficulties in explaining the approach, the motivation behind the approach, the results of the ablation studies, etc., which make the paper extremely difficult to follow, likely difficult to build upon, and potentially difficult to reproduce.\\n\\n\\n__4. Recommendation Explanation__\\nI would argue that the main goal of this paper is to show a novel application of the Axial Transformer approach of Ho et al 2019b and this is done by adapting that method to the task of Image Colorization. I would argue the focus is around applying that method, not exclusively doing better Image Colorization, because there is no discussion around how this advances our understanding of image colorization broadly. Nevertheless, that (showing the usefulness of an approach to a new task) is a valid objective, but because Ho et al 2019b has not been formally accepted, it also somewhat then requires this work to explain and justify approaches of that work. I believe that challenge has a lot to do with some of the difficulties in the paper around the methods and experiment explanation. \\n\\nWhile the (within sentence) language is clear, the overall flow of the paper is difficult to follow. It feels like the authors were strongly up-against the page limit, so important explanation and discussion was omitted or made very terse. For example, the ablation studies, while thorough, sort of feel dumped there. There's no discussion as to why those and not other experiments were run and what the results of those experiments tell us more broadly. Similarly, the model and architecture section seem like they should be more intertwined. As another example, some of the methods in Table 2 are not referenced anywhere and it's not clear how they were used in this context (did they start with a low res image, or high-res image). That calls the reproducibility of the comparison studies into question.\\n\\n\\n__5. 
Questions__\\nOverall it seems like every generated image has a red, green, and blue variant. Were they sampled in a particular manner to guarantee this? Obviously it is possible to draw other samples, but do they all largely fall into one of these three coarse categories? When the performance is poor for a given sample, it is usually because entire swaths of the image are being painted in with a very non-natural color (like someone\\u2019s face being green, or the entire picture having a blue-ish exposure). Can you speak to this and other common \\u201cmistakes\\u201d that are observed? How do these compare with some of the other methods you compared yours against? Are there simply fewer \\u201cmistakes\\u201d (i.e. non-natural images), or are the types of imperfections created by this approach different in ways that would warrant different use-cases?\\n\\nIt seems like a lot of compute (16 TPUv2) was used and the batch size was relatively large. Is the large batch size necessary for obtaining these results, or could a smaller amount of compute and smaller batch size be used?\\n\\nWhy does training baselines with 2x and 4x wider MLP dimensions make \\u201ca fair comparison\\u201d? Is \\u201cBaseline\\u201d in Figure 3, x1 (standard) MLP but no conditioning? Why would x1 be better than x4, but worse than x2?\\n\\nThe caption of Figure 2 feels a bit imbalanced. ColTran core is called out specifically, but then the ColTran Upsamplers are not referenced. Is the \\u201cAxial Transformer\\u201d just the right branch of the ColTran Core (which the figure seems to suggest) or the entire ColTran core, as the caption seems to suggest.\\n\\nOn pg. 3 \\u201cColTran Core\\u201d it is stated that \\u201cwe also train a parallel prediction head which we found beneficial for regularization\\u201d. I think it would be useful to give additional explanation here as it\\u2019s a fairly significant architectural choice. 
If results of not including this head exist, perhaps it would be useful to show this in the appendix. Otherwise a brief explanation as to why this additional head aids the regularization would be useful. Since this is an instantiation of the Axial Transformer, is this prediction head added to that approach for this particular task, or is this already a part of the standard Axial Transformer (and therefore maintained here for consistency)? Ah, this is explored further in section 5.3\\u2026. It would be helpful to the reader to reference this section when you introduce the prediction head (i.e. that the impact will be explored in section 5.3).\\n\\nIn 4.2 it says they \\u201cadapt the Axial Transformer model for colorization\\u201d. Can you elaborate on the adaptation? It\\u2019s not clear (without looking up that reference) what belongs to the original approach vs. what was added/changed here for this specific task.\\n\\nIt feels odd to mention the number of axial attention blocks in the training section as opposed to the model or architecture. This is a fundamental architectural choice, is it not? \\n\\nWhy are the sets of models compared via FID and Human Evaluation different?\\n\\n\\n__6. Feedback__\\nThe demonstrated colorization scores and output are compelling; however, I believe the structure of the text is very detrimental. I think it would potentially be feasible to fully rework the text to make it more readable and reproducible and therefore a solid publication because the result is compelling, but as it stands, there is substantial rewriting which would need to be done in my opinion.\\n\\nIn \\u201cModel: ColTran Core\\u201d fc is described as a conditional, auto-regressive axial transformer. While the definitions of pc and pc~ are stated thereafter, there is not any further description as to what this means and/or a citation. The Ho et al. citation is provided in the Figure 2 caption. 
At a minimum that citation should be given here as well, but it would be good to give a textual description as to what a \\u201cconditional auto-regressive axial transformer\\u201d is since it is not a commonly used architecture. \\n\\nThe second paragraph under ColTran Upsamplers (In our experiments\\u2026) is slightly confusing. It seems to suggest that parallel upsampling is sufficient and advantageous for a number of reasons, but that prediction is chosen to reduce color inconsistencies. Then it seems to go back to again say that Parallel upsampling has a huge advantage of being fast. This is perhaps also confusing because there is a \\u201cSample\\u201d label in Figure 2. The confusion is less about the validity of the approach and more that the language (in conjunction with the figure) is difficult to follow for someone not already familiar with Guadarrama 2017.\\n\\nWhile not necessary, it would be interesting to see how this approach performs on out of domain images (i.e. not from ImageNet). \\n\\nIn 5.5, it\\u2019s stated that you follow the protocol used in PixColor. It would probably be best to additionally include the citation here, or the citation in place of \\u201cPixColor\\u201d, even though that work is cited near the beginning of the paper (when the reader comes to this section, they may be unfamiliar with this approach and would like to go directly to that reference as opposed to having to search for \\u201cPixColor\\u201d and then go find the reference).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Extensive experiments and strong performance, novelty is a bit incremental.\", \"review\": \"Update: Thanks for the additional ablation studies. I would like to keep my original evaluation which is acceptance.\\n\\n--------------------\\n\\nThis paper proposes a transformer architecture for image colorization. It uses an axial transformer to process the low-resolution grayscale image, and uses a conditional version of the axial transformer to predict a low-resolution color image autoregressively conditioned on the gray image. It then uses an axial transformer to predict the final high-resolution output pixels.\", \"pros\": [\"The paper is well-written and easy to read. The literature review is comprehensive.\", \"Image colorization is an important problem in computer vision. To my knowledge, this is the first paper that applies Transformers to colorization. It could potentially be very impactful and inspire future work.\", \"Both an automatic metric (FID) and human evaluation are used to compare the method with existing approaches. The proposed method significantly outperforms the previous state of the art. The qualitative examples are very impressive as well.\", \"The paper performs extensive ablation studies (Figure 3) to verify the contribution of different components.\"], \"cons\": [\"The technical novelty of this paper is a bit limited. It basically applies existing conditioning techniques to the axial transformer and uses it for image colorization.\", \"It seems that no cLN (Fig. 3 mid) is better than cLN with mean-pool only (Fig. 3 right), which is a bit counterintuitive. Any possible explanation? Also, is there a reason to use the globally aggregated context for cLN but not for cMLP/cAtt? An ablation study on that would be helpful. 
Besides, there is an ablation study on shift-only modulation but I am curious about how scale-only modulation performs.\", \"It would be nice to show the number of parameters, training/inference speed of the proposed approach, and compare them to the baselines.\", \"Please add references to all baseline methods compared in Table 2. I'm able to find the citation of PixColor in other parts of the paper, but cannot find most of the others'.\"], \"minor_problems_that_do_not_affect_my_score\": [\"P1: determinisitic -> deterministic\", \"The aggregated context is denoted as \\\\hat{c} in Table 1 but as \\\\bar{c} in section 4.3.\", \"It would be better to use the vector format for Figure 3/4, and enlarge Figure 5 a bit.\", \"Overall, I vote for acceptance. The novelty is not huge but I still think it would be a nice paper for ICLR and have impacts on the field given its strong empirical performance.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
cef_G2hkiGc | More Side Information, Better Pruning: Shared-Label Classification as a Case Study | [
"Omer Leibovitch",
"Nir Ailon"
] | Pruning of neural networks, also known as compression or sparsification, is the task of replacing a given network, which may be too expensive to use (in prediction) on low-resource platforms, with another 'lean' network that performs almost as well as the original one, while using considerably fewer resources. By turning the compression ratio knob, the practitioner can trade off the information gain versus the necessary computational resources, where information gain is a measure of reduction of uncertainty in the prediction.
In certain cases, however, the practitioner may readily possess some information on the prediction from other sources. The main question we study here is, whether it is possible to take advantage of the additional side information, in order to further reduce the computational resources, in tandem with the pruning process?
Motivated by a real-world application, we distill the following elegantly stated problem. We are given a multi-class prediction problem, combined with a (possibly pre-trained) network architecture for solving it on a given instance distribution, and also a method for pruning the network to allow trading off prediction speed with accuracy. We assume the network and the pruning methods are state-of-the-art, and it is not our goal here to improve them. However, instead of being asked to predict a single drawn instance $x$, we are being asked to predict the label of an $n$-tuple of instances $(x_1,\dots,x_n)$, with the additional side information that all tuple instances share the same label. The shared label distribution is identical to the distribution on which the network was trained.
One trivial way to do this is by obtaining individual raw predictions for each of the $n$ instances (separately), using our given network, pruned for a desired accuracy, then taking the average to obtain a single more accurate prediction. This is simple to implement but intuitively sub-optimal, because the $n$ independent instantiations of the network do not share any information, and would probably waste resources on overlapping computation.
We propose various methods for performing this task, and compare them using extensive experiments on public benchmark data sets for image classification. Our comparison is based on measures of relative information (RI) and $n$-accuracy, which we define. Interestingly, we empirically find that (i) sharing information between the $n$ independently computed hidden representations of $x_1,\dots,x_n$, using an LSTM-based gadget, performs best among all methods we experiment with, and (ii) for all methods studied, we exhibit a sweet spot phenomenon, which sheds light on the compression-information trade-off and may assist a practitioner to choose the desired compression ratio. | [
"Pruning",
"Compression",
"CNN",
"LSTM",
"Image classification"
] | Reject | https://openreview.net/pdf?id=cef_G2hkiGc | https://openreview.net/forum?id=cef_G2hkiGc | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"W8ulsOCQHCm",
"LadIPHLEnFO",
"ZPQ6RRic1Nj",
"DlwPIx1EjQU",
"65btFXEAd7",
"NncE8reODbu"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040514336,
1604676118419,
1604272570944,
1604023410594,
1603753966315,
1603746732939
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3387/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3387/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3387/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3387/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3387/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper tackles the problem of classifying a set of points given the knowledge that all points should have the same class. There seems to be a consensus among the reviews that under the assumptions made, the authors provide thorough experiments convincing that their method is useful. However, the paper has two weaknesses that are too strong to ignore. First, the clarity of exposition seems to be lacking, specifically a clear motivation for this new setup as well as its connection to the abstract problem being solved. Second, the assumptions made seem to be too strong, and the solution seems to rely on these strong assumptions too much.\"}",
"{\"title\": \"Recommendation to Reject\", \"review\": \"The authors study how to improve the prediction and pruning performance with additional information generated by labels in the shared-label classification problem. As a starting point, the authors consider a simple scenario where side information can be extracted from the same labeled batch. To train the neural network, the authors use a balanced loss consisting of a weighted sum of general cross-entropy and cross-entropy of the average batch prediction. The authors also suggest a new CNN-LSTM architecture that exploits the side information to improve predictive performance. The experiments section shows the proposed method performs well and achieves a high compression rate.\\n\\nThe data model used in this study is different from the common classification problem. This paper assumes that n data points in a shared-label batch, referred to as an \\\"n-tuple,\\\" provide side information. In general, classification problems have labels independent of other data points. \\n\\n[Strength]\\n \\nThe authors study a general relationship between pruning and additional side information for the shared label problem. The authors define \\\"relative information\\\" to measure the appropriate compression rate for various prediction performance and pruning levels. Using the relative information, we can dramatically reduce the number of parameters while maintaining prediction performance. Besides, the proposed CNN-LSTM architecture improves the prediction performance with the shared-label training scheme.\\n\\n[Weakness]\\n\\nThis paper considers a very different data model which has never been studied before. The shared-label model should be motivated very well. I'm not convinced why we have to study this model.\\n\\nThe explanation of how the network benefits from the side information is ambiguous. There is no theoretical or empirical explanation of how the side information and balanced loss can discriminate ineffective parameters. 
It requires providing clearer evidence, such as the statistics of network parameters before and after the pruning algorithm.\\n\\nThe existence of an optimal compression rate \\\\rho^* should be discussed more rigorously, if possible with a theoretical proof based on relative information. \\n\\nIn the experiment section, only the 5-Conv net was investigated to check the effectiveness of pruning. It would be more convincing if the authors could add results for the proposed CNN-LSTM network.\\n\\n[Minor Issue]\\n\\nFigures 1-9 are challenging to read and interpret. I suggest the authors use other colors with a bigger font size.\\n\\nIt would be better to use tables rather than graphs to present many experimental results. Graphs are too small to understand.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Rigorous experimental results, but insufficient motivation for proposed problem\", \"review\": \"More Side Information, Better Pruning: Shared-Label Classification as a Case Study\", \"summary\": \"The goal of this paper is to use side information about a task to prune models more effectively i.e., with minimal loss in performance as compared to original model.\\nThe particular type of side information they focus on is prior knowledge about a collection of instances sharing a class label. The motivation is an Information Retrieval Scenario, wherein it is expensive to identify relevant examples for a query, therefore, an approximate, cheaper model is used to identify good candidates that have a high likelihood of being relevant. The original model is then computed only on the identified subset. The paper then switches to using a related, toy problem where the goal is to predict an unknown, shared label for a given tuple of n items.\", \"the_paper_describes_4_methods_to_exploit_the_additional_information\": \"(i) a baseline method that trains in a standard fashion and computes the label of each example in an n-tuple independently. (ii) Balanced method: Sum of standard classification loss and cross entropy computed with respect to average labels per batch\\n(iii) Graph method based on GNNs (iv) CNN+LSTM architecture, where n-tuples with shared labels are treated as a sequence passed to an LSTM.\\nFinally, the paper proposes a relative information metric that measures the tradeoff of information gain vs the computation cost. Empirical results are presented comparing the various methods, showing the relationship between relative information, compression ratios and n.\", \"strengths\": \"1. Authors propose a novel quantitative metric (relative information gain) in measuring the loss of performance in pruned models vs computational cost. This gives practioners a tool to clearly think about tradeoffs in cost vs model certainty.\\n2. 
Paper provides rigorous experimental results. Each proposed method is compared under various compression ratios. Relative information is shown to be correlated with both compression ratio and n (number of examples in a tuple with shared label). For a given compression ratio, relative information is shown to improve with increasing n until a \\\"sweet spot\\\" is reached, beyond which relative information starts degrading.\\nRelationship between accuracy at various compression ratios and FLOPs is also reported clearly.\", \"weaknesses\": \"1. The paper is missing a clear description of real-world applications. \\na. The original motivation is a very interesting problem, wherein only an approximate function can be computed, generating a subset that *possibly* shares the same label. However, the actual task the rest of the paper focuses on is materially different, wherein n examples are known to share the same label. Can the authors describe a real-world scenario where one is guaranteed to receive n examples at a time belonging to a single class? \\nb. If these n examples come from an approximate classifier as in the original motivating scenario, how do the methods described in this paper handle \\\"within-tuple\\\" uncertainty, i.e. uncertainty about whether all the examples indeed belong to the same class? If we had reasonable certainty in all examples having the same label, then why do we need another more complex/expensive classifier to be applied subsequently?\\n2. The methods are not clearly described.\\na. For the balanced method, a portion of the loss per batch is cross-entropy between the average label of an n-tuple and the true label of the n-tuple, averaged across k n-tuples in the batch. This is not clearly described at all and required many readings. Notation is unclear: both $k$ and $l$ are used to describe the number of n-tuples in a batch. k is also used to describe the number of data points belonging to a single label i. 
In that case, different labels $i$ have different sizes $k_i$. After k is introduced, it is not used at all.\\nb. The graph method is described in 1 sentence. Some information about the architecture is in the appendix, which is not very helpful either. The paper needs to be self-contained. What is the input graph passed to the GNN? Assuming all examples with the same label are represented as fully connected subgraphs, does the complete dataset comprise several disconnected components? Is the performance impacted by the number of subgraphs/labels?\\nc. In the LSTM based method, an n-tuple is treated as a sequence, so that LSTMs can be used to capture the fact that the examples in an n-tuple are related. This is unusual usage, since LSTMs would only be able to model a local neighborhood in practice. The paper claims that ordering examples by certainty gives improved performance, and supports this claim with empirical results. It is unclear how well the models cope with errors in confidence or uncalibrated models. The current set of experiments does not address this key factor that would occur in any practical setting.\", \"conclusion\": \"My recommendation is to reject the paper at this time, because the problem statement is not well-formed. Specifically, how the methods handle uncertainty of labels within a tuple. This is especially confusing given that the LSTM-based solution is found to be the best empirically. However, how would an LSTM perform if intermediate examples are in fact mislabeled? Additionally, the explanation of the methods is unclear. \\n\\nI would encourage the authors to make their future work stronger by grounding the work in a real problem. The IR example cited at the beginning is a good one. If this problem is solved as is, this work can be very impactful. The simplifying assumptions made at the moment weaken the problem statement. 
Therefore, even though the experimental results are thorough, their application to any practical scenario is not obvious.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very confusingly written paper\", \"review\": \"The paper proposes to use a set of input examples, x1 to xn, having a common label y, and use them together for better classification. And use these shared label examples as additional information during model pruning.\", \"i_have_multiple_challenges_to_understand_this_research_paper\": \"1. Clarity of Writing:\\nThe paper is very tough to read and understand. The authors jump through multiple levels of motivations, starting with information retrieval and then to approximation of database queries. But then the rest of the paper talks about loss averaging and results are shown using a CNN model on CIFAR10, and TinyImagenet. Either the whole motivation on IR aspects can be removed or relevant experiments and approach be proposed. At multiple places, I am either lost or confused on what is the problem that the paper is trying to solve. Example, after reading 2 pages of the paper, the authors state, \\\"We depart from the original motivating information retrieval scenario, and henceforth consider a simpler, toy problem which we call the shared-label prediction problem.\\\"\\n\\n2. Misleading Title and Takeaway in the paper:\\nThe paper title, abstract, and motivation says to use \\\"More Side Information\\\". While shared-labels is not side information or additional information. If you have 10 instances to classify, instead of classifying them independently, the paper is trying to classify them together. So, there is no side information used in the approach. Also, this is not a \\\"structure\\\" present in X.\\n\\n3. Incorrect or Insufficient assumptions:\\nThe paper makes a lot of strong assumptions, which are not practical:\\na. What if multiple instance, x1 to xn of the same class label is not available ? In few shot learning or one shot learning scenario. \\nb. 
\\\"We assume the network and the pruning methods are state-of-the-art, and it is not our goal here to improve them\\\" - I do not understand the need for introduction model pruning for shared label classification. If the goal is not to improve pruning methods, then why do network pruning, at all?\\nc. \\\"For all methods we study, fixing the information size n, our experiments suggest that there exists a sweet spot phenomenon, or a \\\"compression threshold\\\" in the sense that RI, as a function of \\u03c1, has a global maximum \\u03c1\\u2217\\\" - There is no proven approach that such a threshold should exist for any dataset/model combination. I do not agree to this assumption.\\n\\n4. Lack of Novelty:\\nMost of the approaches explained in Section 4 are just averaging the loss of the tuple of samples. When we average the samples, it is automatically considered that the tuple of samples are drawn from i.i.d. And in the proposed CNN-LSTM approach, the tuple of samples are \\\"sequentially\\\" classified. That raises more questions than answers - why sequence, and in what sequence? While the paper does not discuss any of these important questions.\\n\\n5. Appendix has more information than the paper itself:\\nMost of the detailed and important information about the paper, including primary details about the approach, the model architecture, and many experimental results are in the appendix. At many instances, the paper reads like an index to the appendix. Example, section 4.3, \\\"We investigate two kinds of architecture, see Appendix B for further information\\\". Throughout the paper we do not have information the architectures. Even the proposed approach in 4.4, \\\"An illustration of this architecture is shown in Appendix A.1\\\"\\n\\n6. Lack of experiments:\", \"the_proposed_approach_of_cnn_lstm_is_comapred_with_baseline_methods\": \"loss averaging, graph based averaging of loss. 
And the paper claims that the CNN-LSTM approach is better than the baseline methods. This is insufficient. The paper fails to compare with other relevant techniques in the literature and place the paper empirically among the other papers in the literature.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper discusses empirically an interesting network pruning workflow with side information provided by data instances sharing the same class labels.\", \"review\": \"This paper discusses empirically a workflow about how to compress network architectures with side information. The side information defined in this study is provied by the training data instances that share the same class labels, aka the problem of shared label prediction.\\n\\nThe contribution of this work can be concluded as\\n1. This work defines a set of benchmarks measuring efficiency of shared label prediction, including relative information and information gain. These two metrics are used to measure how much information can be learned by the compressed neural networks over the data instances sharing the same label, given the compression ratio and the name of training instances \\n2. This work empirically unveil the sweet spot phenomena, which indicates how relative information varies in an montonical way with respect to different compression ratios. \\n3. This work proposes to make full use of the training instances sharing the same class label via a combination of CNN and LSTM. CNN is used to extract features and LSTM is used to encode the correlation between different instances sharing the label. Experimental study confirms the merit of the proposed workflow. \\n\\nOverall, this paper is well explicated, starting with clearly written background on basic concepts and prior work, stating clear the algorithmic design and conducting correspondingly the experimental study to confirm the benefits of the algorithm.\", \"there_are_several_downsides\": \"1) no theoretical discussion about the sweet spot phenomena is given. The turing point shown in this observation is very interesting. 
If we can provide an estimate about when the turning point appears (given a compression ratio and a base neural network architecture), that would be very useful for guiding practical network compression tasks. \\n\\n2) Can we still observe the sweet spot phenomenon for the proposed CNN + RNN workflow? It looks like Figures 8 and 9 both assume the compression ratio is fixed. \\n\\n3) For the RNN architecture, why does the confidence ordering based RNN perform better than random ordering? \\n\\n4) When n = 1, it is not surprising to find the balance method has a worse performance than the baseline method. When n = 1, the average batch prediction includes a blurred prediction result by taking the average of k training instances (when n = 1, each n-tuple contains only 1 instance). Thus the average same label loss should be noisy.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Lacking clarity\", \"review\": \"Summary\\n\\nThis paper introduces the problem of shared-label prediction -- the problem of classifying the (common) label of a set of points conditioned on the knowledge that they all share the same label -- and suggests various methods that take advantage of side information to solve it.\\n\\nRationale for Score\\n\\nI think that the idea of using side information for pruning is promising overall. However, given the lack of clarity in the exposition -- e.g., in motivating and defining the problem (and its connections to pruning) and conveying & contextualizing the paper's contributions --, I am unable to fully understand and appreciate the significance of this work.\\n\\nStrengths\\n\\n- The shared-label prediction problem introduced seems interesting and made me initially wonder whether it could be applied to, e.g., accelerating pooled-testing methods for Covid-19 (with neural nets) where a small partitioned group of samples may share the label (negative) with high probability\\n- Using side information for pruning is a nice idea that may be appreciated by the pruning community\\n- There are some empirical evaluations of the proposed approaches\\n\\n\\nWeaknesses\\n\\n- The abstract is very long, takes up almost the entire first page, and reads more like an introduction than a proper abstract. Despite its length, the proposed method or novel idea of the paper is not revealed, but rather only merely hinted at as \\u201cwe propose various methods for performing this task.\\u201d\\n- The problem that this paper attempts to solve is not very well motivated by a concrete real-world application as the abstract suggests. 
As someone that is not very familiar with the problem tackled by the paper, I am left trying to think of a scenario where we *know for a fact* a priori that the group of points we are feeding as input to the neural network share the same label, but yet do not know the label itself.\\n- The problem definition (and consequently, my sense of the paper\\u2019s contribution) is very confusing. There are numerous \\u201cproblems of interest\\u201d introduced in the paper right off the bat and it is very difficult to discern the particular problem that serves as the main focus of the paper. In particular, the paper starts off in Sec. 1.1 with a very generic problem of trying to maximize the ground-truth \\u201cretrieval function given a static space of objects.\\u201d This is then relaxed to using approximations of the ground-truth retrieval function in terms of a neural network, which is then reformulated as the problem of using a pruned network to construct a shortlist of candidates for which the original network is used. Then, the authors claim that elements of this shortlist have similarity that we can exploit (for reasons that are not clear to me); unfortunately, immediately after the problem is then recast again as one of classification where the input space is partitioned into k clusters, each consisting of points of the same label, and the objective is to classify the label of each cluster using n random samples using the information that all points in the cluster share a label. This is then somewhat more rigorously defined as the \\u201cshared prediction problem\\u201d in Sec. 1.2 and set as the target problem that this paper tackles.\\n\\n Overall, I found this exposition quite confusing and not very well-introduced or motivated by a real-world application (as the authors had hinted at in the abstract). 
Since the problem in question is quite general, I am also not sure why the pivot of the paper is to emphasize pruning in the introduction, rather than as a potential application of the proposed approach in, e.g., the experiments section. Since the shared prediction problem is more concrete relative to the more general problem that the authors start off with in Sec. 1.1, I would recommend simply defining the shared prediction problem (the most concrete of them all) first, rather than starting with the most abstract problem.\\n- The definitions for the variables used are ambiguous and defined way later after being used. For example, the compression ratio \\\\rho, which is used as early as Sec. 2, is not formally defined until Sec. 4. This might have been fine if the definition of the compression ratio was consistent with that of existing work -- e.g., (# of parameters in the original network) / (# of parameters remaining in pruned network) -- however, it turns out that, as defined much later on in Sec. 4, it is defined as \\u201clogarithm to the base 10 of the number of parameters in the original network divided by the number of parameters remaining after pruning.\\u201d This definition is quite confusing and misplaced under the section \\u201cOur Methods.\\u201d For clarity, I would recommend defining the variable earlier on when it is used in Sec. 2 (to define the problem), and using actual math to define the compression ratio, among other pertinent variables.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
BW5PuV4V-rL | Gradient-based training of Gaussian Mixture Models for High-Dimensional Streaming Data | [
"Alexander Gepperth",
"Benedikt Pfülb"
] | We present an approach for efficiently training Gaussian Mixture Models by SGD on non-stationary, high-dimensional streaming data.
Our training scheme does not require data-driven parameter initialization (e.g., k-means) and has the ability to process high-dimensional samples without numerical problems.
Furthermore, the approach allows mini-batch sizes as low as 1, typical for streaming-data settings, and it is possible to react and adapt to changes in data statistics (concept drift/shift) without catastrophic forgetting.
Major problems in such streaming-data settings are undesirable local optima during early training phases and numerical instabilities due to high data dimensionalities.
We introduce an adaptive annealing procedure to address the first problem,
whereas numerical instabilities are eliminated by using an exponential-free approximation to the standard GMM log-likelihood.
Experiments on a variety of visual and non-visual benchmarks show that our SGD approach can be trained completely without, for instance, k-means based centroid initialization, and compares favorably to sEM, an online variant of EM. | [
"Gaussian Mixture Models",
"Stochastic Gradient Descent",
"Unsupervised Representation Learning",
"Continual Learning"
] | Reject | https://openreview.net/pdf?id=BW5PuV4V-rL | https://openreview.net/forum?id=BW5PuV4V-rL | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"H1yFAS5fZ4M",
"uj6CJXFgkD5",
"klGYaZZ_SBx",
"rrchfQVnfda",
"7nTkNmO6k1",
"rQeDtQDJDEt",
"QXMhJ9K9874",
"a_qbJG_-RDN",
"ZMWM5Diy8h2",
"m6aeLE-0Be8",
"BKluMxDCrK"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040484376,
1605093035660,
1605092959042,
1605092873280,
1605092772577,
1605045935008,
1605034957620,
1604171332070,
1604031934232,
1604026762483,
1603981569296
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3386/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3386/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3386/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3386/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3386/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3386/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3386/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3386/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3386/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3386/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes training Gaussian mixture models using SGD, creating an algorithm appropriate for streaming data. However, we feel that the current manuscript does not sufficiently support the proposed method, and lacks insight into its workings. The reviewers believed the method lacked justification (while the authors claim to have added theoretical justification to the revised manuscript, I did not see any such new theory), and were not convinced that the method offered a significant improvement on existing methods.\"}",
"{\"title\": \"Author reply for initiating discussion\", \"comment\": \"Thank you for the constructive review! As this is an open discussion phase, we would value your feedback to our replies to better understand how we can improve the paper. We will incorporate the obvious improvements right away, and the rest as a function of the feedback in the open discussion phase.\", \"concerning_your_remarks\": \"4. This is a very fair point, thanks for pointing it out: The experiment 4.4 is purely empirical and the claimed justification for the observed effect (which is plainly there) is not very rigorous. On the other hand, adding, e.g., an EWC term to the loss or performing generative replay would be beyond the scope and space of this paper. As the message of this experiment is not really a key point of the paper, we might simply drop it, and the associated claim, in favor of elaborating more on the key claims (cf. you other suggestions) and treat incremental learning for GMMs in a separate paper. What do you think? --> Update: Ok we went ahead and did it anyway\\n \\n 5. We have added empirical results (left col in Tab.3) that show that this approximation holds, in all experiments, to an extremely high degree of precision, and a theoretical justification of this fact. As for the approximation itself, we elaborate on p. 3, Sec. 3.2, how this max-component approximation is derived from the \\\"classical\\\" GMM log-likelihood. Is that not exactly what you mean by \\\"how it is related the classical GMM, to what extent it is an approximation\\\"? \\n \\n 6. We have added a new experiment (A.6) showing that higher K is helpful for density estimation, and a discussion of finding a good K for clustering (discussion section, A.6).\", \"to_respond_to_your_concerns\": \"1) If we use the trained GMMs for clustering, we would use the conventional approaches developed for GMM/k-means to determine an optimal number of components. 
The fact that we train GMMs by SGD (instead of sEM or EM) should not impact these methods in any way, and thus there is no need for new contributions here (--> p8, discussion of \\\"hyper-parameter selection guidelines\\\")\\n 2) When performing density estimation, the number of mixture components follows a \\\"the more the better\\\" principle as elaborated on p. 8 (\\\"Hyper-parameter selection guidelines\\\").\"}",
"{\"title\": \"Author reply for initiating discussion\", \"comment\": \"Thank you for the constructive review! As this is an open discussion phase, we would value your feedback to our replies to better understand how we can improve the paper. We will incorporate the obvious improvements right away, and the rest as a function of the feedback in the open discussion phase.\", \"concerning_the_points_you_make\": \"(1): In streaming settings, prior, data-driven initialization is impossible because data are not (yet) available. Even if they were, k-means scales very badly for huge high-dimensional datasets so it becomes impractical. And the experiments of Secs. 4.1 and 4.3 clearly show that data-driven and random initializations perform equally well. So the advantage is: we can achieve the same results as with k-means, but in scenarios where k-means is unfeasible or impossible. \\n \\n (2): We feel that SGD requires no special mathematical analysis here, and that is probably not what you meant anyway. For the annealing, please refer to the appendix A.2 for an analysis. We show that $\\\\sigma$ always imposes an upper bound on the loss we are optimizing, and that we can improve the loss by decreasing $\\\\sigma$ except when $\\\\sigma \\\\rightarrow 0$, demonstrating that adaptive annealing is stable. We could move this to the main text, would you consider that helpful?\\n \\n (3): Thank you for pointing this out, we have stated this more clearly (top of p8).\\n We do NOT wish to show that our SGD can outperform sEM by a large margin, nor do we think this is generally possible. After all, sEM makes use of strong theoretical guarantees that do not hold for SGD. However, SGD can do just as well as sEM, or slightly better, which is surprising given the reasons just mentioned.\", \"there_is_one_aspect_where_sgd_outperforms_sem_by_a_large_margin\": \"for very high-dimensional data like SVHN and Fruits (Tab. 3, discussed at the top of p. 8). 
Here, sEM does not converge well without using k-means but SGD does and performs much better. We have added another Figure to the main text (Fig. 3), plus some text (p7) explaining why this is the case.\\n \\n (4): We agree. This experiment is just an \\\"interesting fact\\\" without any theory behind it, and not extremely important for the message of the paper. It might be better to skip this experiment in favor of some more analytical contribution you mentioned earlier. What do you think? --> Update: in the absence of a reply, we did it anyway!\"}",
"{\"title\": \"Author reply for initialing discussion\", \"comment\": \"Thank you for the constructive review! As this is an open discussion phase, we would value your feedback to our replies to better understand how we can improve the paper. We will incorporate the obvious improvements right away, and the rest as a function of the feedback in the open discussion phase.\\n\\nAnswer to the general text(\\\"lack of clarity\\\"): \\n \\n Could you please elaborate where precisely you do not find our paper sufficiently clear and what we could do about that? Your statement (\\\"However I feel that ...\\\") sounds like a summary judgement of the whole paper, and it is hard to improve the paper based on that. In Sec. 1.3 (contributions), we present a bullet-list of contributions: if we tried to put this more concisely and reduce the number of bullet points, would that make the paper more clear?\", \"concerning_the_specific_comments\": \"1. Point taken, we rephrased the contributions to that effect. To answer your concerns: Annealing is not novel or remarkable in itself. What is novel is its use in GMM/SGD training which has never been proposed before, and which gives good results.\\n \\n 2. First point: the max-component approximation is more loose than Jensen (easy to show). Second point: it is still an extremely good approximation for high-dimensional data. We measured that all GMM responsibilities are > 0.99 throughout all experiments, and we qdeed q new experiment to the paper showing these results (--> middle of p.7 and Tab.3) which justify our approximation. Since this can be expected to hold only for high data dimensions, the paper is restricted to the high-dimensional case (cf. paper title). \\n \\n 3. Interesting point: we will add a statement to the discussion elaborating on this in the final version (more space). To answer the question here: this might work for the degenerate solutions but not for sparse-component ones. 
The sparse-component solutions seem to represent very broad basins of attraction, so perturbing the parameters will in general only lead back to sparse-component solutions. \\n \\n 4. Very good point, thank you: We added a simple experiment showing the value of annealing to the experiments section (--> 4.2). To answer the question: The value of annealing is established by the fact that training never converges when not using annealing (i.e., having a small constant $\\\\sigma$), independently of the used dataset. This can be observed in the prototypes but also from the loss function values which are consistently higher with annealing turned on.\\n \\n 5. Point taken, notation has been improved, mainly in Algo. 1\\n \\n 6. Thanks for pointing this out, we have made this more precise (--> Sec. 4). To answer the question: identical means \\\"identical hyper-parameters but different seeds for random initialization\\\", so as to exclude/reduce the impact of a particular random initialization (similar as for DNN training).\"}",
"{\"title\": \"Author Reply for initiating discussion\", \"comment\": \"Thank you for the constructive review! As this is an open discussion phase, we would value your feedback to our replies to better understand how we can improve the paper. We will incorporate the obvious improvements right away, and the rest as a function of the feedback in the open discussion phase.\", \"response_to_the_questions_in_the_text\": \"K-means initialization would require having all the data at your disposal which is excluded in streaming settings so we simply cannot do that. And k-means does not scale well to huge high-dimensional datasets which is another point against it in the targeted scenario.\", \"responses_to_the_mentioned_issues\": \"1.1 We chose the number of epochs large enough such that annealing always converged in all experiments, for simplicitly. In practice, one would terminate training once a certain annealing radius is reached --> extremely simple scheme, thus of advantage. We added a statement to that effect to the experiments section (--> 4.4)\\n\\n 1.2 We added experimental results (--> Tab.3 ) showing that responsibilities are peaked to an extremely high degree (always > 0.99) for all datasets. The reason is the high data dimensionality which leads to large inter-component distances, so the distance to the closest centroid is much larger than the distance to the next-closest one. Since this does not hold for low data dimensions, the paper is restricted to the high-dimensional case (cf. paper title)\\n\\n 2. Please look on page 3, at the end of Sec. 3.2, we do give a concrete example for numerical instabilies (underflow: a value is taken to be 0 when it is not) which can easily lead to NaNs in later stages. Is that the example you wanted to see?\\n \\n 3. Please see p. 5, at the very beginning of Sec. 4: we clearly state that the mini-batch size is always set to 1, in all experiments, to closely emulate a streaming setting (cf. paper title). 
\\n SGD is always applied after a sample is processed, for each sample in 2 epochs. Not sure whether this was what you meant by your question, if not: could you please elaborate more?\"}",
"{\"title\": \"Could you please elaborate?\", \"comment\": \"Thank you for taking time to perform a review of our paper! In order to start a discussion and/or to start improving our paper, could you please list all of the \\\"unsupported claims and contribution(s)\\\" that you believe we are making \\\"throughout the paper\\\"? And could you please elaborate a bit more on what exactly you dislike about the concept shift experiments? At this detail level, we cannot be sure what exactly it is you are criticizing here, and of course we would like to improve the paper according to your suggestions.\", \"just_a_word_on_concept_shift_experiments\": \"these are empirical findings. We train with one set of classes, then retrain with another one, and observe what happens. It is not a key point of the paper, nor do we claim that it is a huge theoretical breakthrough, it is just an interesting fact.\"}",
"{\"title\": \"The paper proposes a SGD based method to learn GMM in non-stationary and high dimensional setting. The paper is tackling an interesting problem however the contributions of the paper is not clearly supported.\", \"review\": \"A major concern about the paper is related to the unsupported claims and contribution throughout the paper. For example, the way the training copes with distribution shift or alleviate forgetting is not clear or elaborated on. Beyond the abstract and before the empirical validation no theory or justification is provided to substantiate this claim. The idea of the paper and the motivation are very interesting. The experiments look convincing. Writing and presentation are a good start point for improving the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Add annealing feature so that SGD can solve max-component log-likelihood approximation of GMM, but still need more discussion in depth\", \"review\": \"The authors propose a technique to training GMM using SGD instead of (s)EM. The major contributions are:\\n1. a proposal for numerically stable GMM training by SGD\\nthis is achieved with max-component log-likelihood approximation and an annealed process to smooth the SGD.\\n2. an automatic annealing procedure that ensures SGD convergence from a wide range of initial conditions without prior knowledge of the data (e.g., no k-means initialization) which is especially beneficial for high-dimensional data\\nSection 4.4 has shown that such annealing hyper-parameter can control re-learning and retention process, which is good.\\nHowever, section 4.1 has shown that with or without random initialization does not impact the perf too much, then why not just use k-means initialization? is it very costly? I believe k-means initialization is also a randomized process.\\n\\nHowever, there are some issues:\\n1. When comparing SGD vs sEM, two strong assumptions are made: \\n1) SGD annealing has converged and \\n2) GMM responsibilities are sharply peaked so that a single component has responsibility of around 1\\nIt basically requires the data does not have a lot of noise, where each point can be assigned with to a label with a dominating probability, so what happens if the data has some noise, and how will such solution reacts for different level of noise?\", \"a_second_question_is\": \"is there any theory or bounds to support the convergence assumption? all experiments do SGD for 2 or 3 epochs, how will the loss be like after 3 epochs?\\n2. No actual examples or discussions are given in terms of numerical instabilities when data dimension is high\\n3. For section 4.3, the streaming scenario is not clear, is mini-batch size constant? random? if fed data of one sample, is the SGD stilling running for 2 epochs? 
or halting and waiting?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Paper investigates some key research questions in training mixture models (GMMs) but lacks clarity\", \"review\": \"The paper proposes a new approach to train GMMs using SGD under a variety of settings (streaming, concept drift, etc) addressing the issues of catastrophic forgetting, problem of parameter initialization and numerical instability.\\n\\nThese are all important and interesting challenges that can help advance the training of latent variable models in general. However I feel that paper is not presented with sufficient clarity to pass the ICLR bar and the exposition could be greatly improved. It is hard for me to grasp the key findings or takeaways in the paper wrt to other existing baseline methods.\", \"i_also_have_some_specific_comments\": [\"The contributions sub-section in 1.3 are vague to read, instead of simply saying - \\\"a novel method\\\", \\\"an automatic annealing procedure\\\" it would be useful to explicitly state what is novel / automatic etc - this would make it easier for the reader to understand the novelty/exact technical contributions made by the authors\", \"In section 3.2, I am curious how the max-component approximation lower bound compares to the other lower bounds used in EM (e.g. Jensen derived lower bound or evidence lower bound in Variational Inference)? To me the presented lower bound seems like inefficient and loose (e.g. if the values are close in magnitude, then intuitively the gap between the sum and max is going to be very large). Maybe I am mistaken in understanding this bound, but it would need some more explanation and it should be contrasted with the vanilla lower bounds used in EM.\", \"In section 3.3, the authors mention updating a subset of components to break the symmetry. 
I am curious how a baseline of simply perturbing the GMM parameters randomly would perform, since that would also help break the symmetry?\", \"I liked the idea of using annealing to avoid local optima and this seems to be one of the key contributions of this work in my opinion. One of my main questions here is: how did the authors measure the value addition of annealing? Did they compare the final solutions obtained by the proposed approach with a baseline (without any annealing) that uses random initialization to deal with local optima?\", \"Notations in the paper could be improved further to make it more readable. For e.g., in Algorithm 1, the iteration steps are missing in the updates (it would be good to use t = 1, 2, ..., T and use them in the update equations, since it is not clear which iterate time steps the parameters on the RHS belong to). Also, maybe I missed this while reading, but prec_clipping(..) doesn't seem to be defined near the algorithm section.\", \"I found some vague statements in the Empirical section which raise a lot of follow-up questions. I would suggest making the description and analysis of the results more precise. e.g. In the beginning of section 4, the authors say: \\\".. repeated 10 times with identical parameters..\\\" -> what does identical mean? what was varied?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This paper proposes an annealing stochastic gradient descent (SGD) approach for efficiently training Gaussian Mixture Models on non-stationary, high-dimensional streaming data. Although it is well organized and the idea is acceptable and the traning process is clearly described, I still concern its originality and effectiveness. Moreover, its functions for non-stationary, high-dimensional streaming data are not analyzed and tested deeply.\", \"review\": \"It is clear that efficiectly training Gaussian Mixture Models with SGD on non-stationary, high-dimensional streaming data is very important for practical applications. The aim of this paper is quite good. In fact, this paper proposes an annealing mechanism for the SGD algorithm and makes the experiments to compare the proposed training algorithm with sEM algorithm on several real-world datasets. However, I have following major concerns:\\n(1). The proposed training scheme does not require data-driven parameter initialization (e.g., k-means) , but the data-driven parameter initialization can improve the efficiency. So, I cannot consider this is an advantage of the proposed scheme. \\n(2). The proposed annealing scheme is straightforward\\uff0cand there is no deep analysis on its performance.\\n(3). According to the experimental results, I cannot find out that the proposed training scheme is remarkably better that sEM. By the annealing procedure, the clustering results should be improved much better.\\n(4). The authors claim that the proposed training scheme is good for non-stationary, high-dimensional streaming data, but there are not much analytic results with the dimenionality and non-stationary streaming data. The forgetting results by controlling the annealing parameter is too rough.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"online gaussian mixture model learning with sgd and smoothed max-component log-likelihood\", \"review\": \"This paper presented a stochastic gradient descent approach to learn a non-stationary high-dimensional Gaussian mixture model from online data. The authors identified 3 challenges - local optima, numerical instability, and catastrophic forgetting, and proposed to address these challenges respective with adaptive annealing, exponential-free approximation, and adaptive SGD learning rate. The proposed approach is demonstrated with several vision/non-vision tasks.\\n\\nOverall, I feel that the paper is slightly below the borderline. There lacks some theoretical analysis of the proposed ideas, an approach to identify the number of mixture components, and an argument as to why GMM is preferred instead of other representation learning techniques.\", \"pros\": [\"Interesting combination of new research trends (continual learning) and old models (GMM).\"], \"cons\": [\"Lack of an approach to identify the number of mixture components\", \"Lack of theoretical justification about the max-component approximation and soft max-component approximation.\", \"Lack of demonstration on why catastrophic forgetting is avoided and how nonstationary data affects this and other algorithms in experiments.\"], \"so_the_following_improvements_could_improve_my_ratings\": [\"An empirical analysis of how the proposed approach avoids catastrophic forgetting with nonstationary data, and a theoretical analysis/comparison between existing approaches, such as regularization, memory replay, and network morning.\", \"A mathematical justification about the soft max-component approximation: how it is related the classical GMM, to what extent it is an approximation.\", \"A theoretical or emprical approach to adapt the number of mixture components, for example, through introducing a prior.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is 
fairly confident that the evaluation is correct\"}"
]
} |
IkYEJ5Cps5H | Succinct Network Channel and Spatial Pruning via Discrete Variable QCQP | [
"Yeonwoo Jeong",
"Deokjae Lee",
"Gaon An",
"Changyong Son",
"Hyun Oh Song"
] | Reducing the heavy computational cost of large convolutional neural networks is crucial when deploying the networks to resource-constrained environments. In this context, recent works propose channel pruning via greedy channel selection to achieve practical acceleration and memory footprint reduction. We first show this channel-wise approach ignores the inherent quadratic coupling between channels in the neighboring layers and cannot safely remove inactive weights during the pruning procedure. Furthermore, we show that these pruning methods cannot guarantee the given resource constraints are satisfied and cause discrepancy with the true objective. To this end, we formulate a principled optimization framework with discrete variable QCQP, which provably prevents any inactive weights and enables the exact guarantee of meeting the resource constraints in terms of FLOPs and memory. Also, we extend the pruning granularity beyond channels and jointly prune individual 2D convolution filters spatially for greater efficiency. Our experiments show competitive pruning results under the target resource constraints on CIFAR-10 and ImageNet datasets on various network architectures.
| [
"Network Pruning",
"Channel pruning",
"Spatial pruning",
"Network Compression",
"MIQCQP",
"Specified target resource constraint"
] | Reject | https://openreview.net/pdf?id=IkYEJ5Cps5H | https://openreview.net/forum?id=IkYEJ5Cps5H | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"PZ91G6umkmK",
"l5MFnFkR24F",
"xUQ0HCMBnH",
"hZp9wy2ct_Q",
"JunV4nkZ8ga",
"_EFcrSJCMlp",
"_Ts7BVEm5Z",
"tFCLoJiWefe",
"mxEswI8Ggxi",
"hpDNAcfGwwh",
"g2cLyBv_lV7"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040465895,
1605885513224,
1605884608398,
1605873793479,
1605856548113,
1605853667871,
1605850758519,
1604660216582,
1603902154868,
1603610192736,
1603323683541
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3385/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3385/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3385/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3385/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3385/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3385/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3385/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3385/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3385/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3385/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposed a new optimization framework for pruning CNNs considering coupling between channels in the neighboring layers. Two reviewers suggested acceptance and two did rejection. The main concerns of the negative reviewers are (a) limited novelty, (b) limited performance metrics and (c) limited baselines. The authors' response did not fully clarify the reviewers' concerns during the discussion phase, and AC also agrees that they should be resolved to meet the high standard of ICLR. Hence, AC recommend rejection.\\n\\nHere is additional thought from AC. The authors propose ours-c and ours-cs. The latter is reported to outperform the former in terms of FLOPs, but AC thinks the former may have merits in other more important performance metrics, e.g., the actual latency and/or memory consumption on a target device. More discussions and results for this would strengthen the paper.\"}",
"{\"title\": \"Response to Reviewer 5 - (2)\", \"comment\": \"**Q3** : Other recent pruning methods, such as AutoSlim, TAS and MetaPruning can reach a strict constraint for FLOPs.\\n\\n**A** : \\nThe reviewer is correct that AutoSlim, TAS, and MetaPruning can also reach the resource constraints strictly. However, these methods are much less efficient compared to our method: 1) AutoSlim needs to train multiple slimming networks. 2) TAS requires extensive evaluations on a large number of different architectures for the neural architecture search. Concretely, it takes 59 hours on 4 GPUs (Tesla V-100) for TAS to prune a ResNet-18 network on ImageNet, while our method can prune a ResNet-50 network on ImageNet in 3 hours only using 10 CPU (Xeon E5-2650) cores without any GPUs (see supplementary material B). 3) MetaPruning trains a PruningNet, which is at least 30 times larger than the original network in terms of network parameters, and then searches a well-performing pruned network via evolutionary methods.\\n\\nMeanwhile, our method is along the line of fixed-importance pruning methods and does not require exhaustive network training like the methods above. Also, our approach can meet the resource constraints tightly with only one round of pruning and finetuning. Other fixed-importance methods need multiple rounds of pruning and finetuning to achieve the same goal.\\n\\nRecently, another paper suggested a budget-aware regularizer to strictly satisfy the target resource constraints ([6]). Our method differs from [6] in that our method deals with target resource constraints in the \\u2018fixed-importance\\u2019 pruning framework, while the work of [6] lies in the \\u2018trainable-importance\\u2019 pruning framework, which is more computationally expensive.\\n\\n**Q4**: Pruning on recent compact networks is favoured, such as MobileNetV2, which is also a routine network for many pruning papers.\\n\\n**A** : Thank you for the suggestion. 
We provide the MobileNetV2 experiment results in the table below (please see the revised supplementary material G as well). \\u2018(tuned)\\u2019 indicates that the normalizing factor ($\\\\gamma_{l}$ in the main paper) is tuned with grid search. A fixed value is used otherwise. Our method shows performance competitive to other recent pruning methods, [7], [8], and MetaPruning. However, we note that our method is much more efficient than those methods since [7] requires repetitive finetuning steps on the proposed networks, [8] also requires iterative trial and error steps to train the RL agent, and MetaPruning trains PruningNet, which is at least $30$ times bigger than the original model.\\n\\n|**Network**|**Method**|**Top 1 Pruned Acc$\\\\uparrow$**|**Top1 Acc drop$\\\\uparrow$**|**FLOPs (%)$\\\\downarrow$**|\\n|:---:|:--:|:--:|:--:|:--:|\\n| MobileNetV2 | [7] | 70.9 | 0.9 | 70 |\\n| | [8] | 70.8 | 1.0 | 70 |\\n| | MetaPruning | **71.2** | **0.6** | 69 |\\n||_________________________________|_________________________________|_________________________________|_________________________________|\\n| | ours-c | 70.8 \\t | 1.0 | **67** |\\n|\\t | ours-cs | 70.2 | 1.6 | **67** |\\n| | ours-c (tuned) | 71.0 | 0.8 | **67** |\\n| | ours-cs (tuned) | 70.9 | 0.9 | **67** |\\n\\n**Table** Top1 pruned accuracy and accuracy drop from the baseline network at given FLOPs on MobileNetV2 architecture at ImageNet.\\n\\n**References**\\n\\n[1] Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks, NeurIPS19. 
\\n\\n[2] Lookahead: A Far-sighted Alternative of magnitude-based pruning, ICLR 2020.\\n\\n[3] Importance Estimation for Neural Network Pruning, CVPR 2019.\\n\\n[4] Collaborative Channel Pruning for Deep Networks, ICML19.\\n\\n[5] DropNet: Reducing Neural Network Complexity via Iterative Pruning, ICML20.\\n\\n[6] ChipNet: Budget-Aware Pruning with heaviside continuous approximations, under review, ICLR 2021 \\n\\n[7] NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications, ECCV18.\\n\\n[8] AMC: automl for model compression, ECCV18.\"}",
"{\"title\": \"Response to Reviewer 5 - (1)\", \"comment\": \"We thank reviewer 5 for the encouraging comments (\\u201cWriting is good\\u201d, \\u201cthe technical details seem sound and clear\\u201d, \\u201cmotivation makes sense\\u201d) and constructive feedback.\\n\\n**Q1** : The novelty is limited. The formulation of the 0-1 optimization for pruning is simple and intuitive. \\n\\n**A** : \\nTo the best of our knowledge, optimally pruning network channels and shape columns modeling the quadratic coupling between neighboring layers in a principled optimization framework (QCQP) has never been explored before. Our formulation also theoretically certifies that any inactive weights do not exist in our network during the pruning procedure, fundamentally guaranteeing the exact computation of the true objective and target resources such as FLOPs and network size (see Proposition 1 and 2). In contrast, previous fixed-importance pruning methods cannot safely remove inactive weights during the pruning process and resort to post-hoc heuristics to remove those inactive weights **after the pruning procedure**.\\n\\nFurthermore, our approach can naturally handle nonsequential connections, such as skip additions and skip concatenations, more flexibly. For example, assume the output feature map of a layer (A) and a skip-connected feature map (B) add up to be the input feature map of the next layer (C). Previous pruning method [1] handles the skip connection by simply grouping their channels - each $i$-th channels of A, B, and C are pruned as one. We note that this heuristic is to simplify the pruning procedure, but this severely limits the feasible set of pruned networks that can be discovered. On the other hand, in our optimization framework, pruning the $i$-th channel of one feature map does not necessarily lead to pruning the others\\u2019 $i$-th channels. 
Instead, our optimization framework provides the minimum set of rules a pruning algorithm should follow, which results in more flexibility during the pruning procedure (see supplementary material A.1).\\n\\nWe are unaware of other papers with similar contributions, and it would be helpful if the reviewer could point us to such works.\\n\\n**Q2**: Magnitude-based pruning is already challenged for it is not accurate to indicate the selection.\\n\\n**A** : \\nTo clarify, we used the term \\u2018magnitude-based pruning\\u2019 to refer to pruning methods based on the importance of neurons or channels. The importance tensor used can be freely replaced with other measures of importance. We believe the proposed method can be used as an analysis tool to compare the performance of several magnitude-based channel pruning methods in the long run.\\n\\nThat said, we note that magnitude-based pruning is an active area of research within efficient network inference. To name a few, [2] provides a method for magnitude-based pruning by considering the effect of neighboring neurons. [3] studies several possible metrics for measuring the importance of neurons. [4] provides a pruning method based on the collaborative importance of neighboring channels in the same layers. [5] iteratively prunes the network according to the average activation of channels.\\n\\nFurthermore, we explain the efficiency of magnitude-based pruning methods over other pruning methods (AutoSlim, TAS, MetaPruning) in the next question.\"}",
"{\"title\": \"Dear Reviewers\", \"comment\": \"Thank you for your time and the effort spent providing thoughtful feedback. We appreciate the encouraging comments [R1] \\u201cGood submission focusing on a valuable topic\\u201d, \\u201cvaluable both in theory and applications\\u201d. [R2] \\u201cThe motivation is clear\\u201d and \\u201cThe idea of mitigating the effect of inactive weights is interesting\\u201d. [R3] \\u201cwell-written and well-motivated\\u201d, \\u201csimple to use\\u201d, and \\u201cmay inspire following works in this area\\u201d. [R5] \\u201cWriting is good, and the technical details seem sound and clear\\u201d, \\u201cThe motivation makes sense\\u201d.\\n\\nWe address your concerns in the individual replies and update our submission.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank reviewer 2 for encouraging comments (\\\"motivation is clear\\\", \\\"idea of mitigating the effect of inactive weights is interesting\\\") and constructive feedback.\\n\\n**Q1**: I don't quite get how the proposed approach is able to prune the last channel of the 2nd layer. It would be nice to discuss this when the optimization is introduced?\\n\\n**A** : Thank you for pointing this out. In our method, we first find the optimal channel activation \\u2018r\\u2019 and then compute the pruning mask \\u2018A\\u2019 to prune filters. This process directly optimizes our objective and finds the global optimum, avoiding the possible local-optimum solutions from the greedy approach. For example, in figure2, we first find the optimal channel activation $r^{(l-1)}, r^{(l)} ,r^{(l+1)}$ which maximizes the sum of importance of remaining filters. Concretely, $r^{(l-1)}=[1,1,1]^\\\\intercal$, $r^{(l)}=[0,1]^\\\\intercal$, and $r^{(l+1)}=[1,1,0]^\\\\intercal$. $A^{(l)} = r^{(l-1)}{r^{(l)}}^\\\\intercal \\\\otimes J_{K_l}$ lead to \\n$A^{(l)} = \\\\begin{bmatrix} 0 & 1 \\\\\\\\\\\\ 0 & 1 \\\\\\\\\\\\ 0 & 1\\\\end{bmatrix} \\\\otimes J_{K_l}$ and $A^{(l+1)} = \\\\begin{bmatrix} 0 & 0 & 0\\\\\\\\\\\\ 1 & 1 & 0\\\\end{bmatrix} \\\\otimes J_{K_{l+1}}$. We clarified these details in section 3.1.\\n\\n**Q2**: Experiments comparing these two methods (QCQP and greedy approach).\\n\\n**A** : \\nThank you for the suggestion. For apple-to-apple comparison, we compare QCQP (Ours) to the greedy approach (Greedy) in ResNet-20. Note that Greedy prunes the channels starting from the first layer while removing inactive weights. Also, in Greedy, we prune the channels with a uniform ratio in each layer and adopt a common heuristic to ignore the skip addition. We compared the objective value of Ours and Greedy under several FLOPs constraints (20%, 40%, and 60% of original FLOPs), and present the results below. 
Ours finds a better optimum compared to Greedy under all FLOPs constraints.\\n\\n| FLOPs (%)$\\\\downarrow$ | Greedy | Ours |\\n|:--:|:--:|:--:|\\n|20 | 138.3 | **196.5** |\\n| 40 | 261.8 | **361.9** |\\n| 60 | 362.0 | **473.2** |\\n\\n**Q3** : Missing references.\\n\\n**A** : Thank you for your comment. We added the three references to the related works in the revised version.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"We thank the reviewer3 for encouraging comments (\\\"well-written and well-motivated\\\", \\\"simple to use\\\", \\\"may inspire following works in this area\\\") and constructive feedback.\\n\\n**Q1** : Results should be evaluated from more aspects\\n\\n**A** : First, we evaluated the pruned ResNet-20 model's actual latency on a machine with 1 GPU (TITAN-XP) using PyTorch. Our pruned model lowers the FLOPs to 46% compared to the original ResNet-20 model. On batch size 512, the inference time of our pruned model and the original model is '18.26ms' and '27.57ms', respectively. This shows our pruned model is 1.51 times faster than the original ResNet-20 model in our environment. \\n\\nNext, we measure the actual network size during the inference. Our pruned model's network size is 46% compared to the original ResNet-20 model. The network size of the pruned model and the original model is '0.12 MB' and '0.27 MB', respectively.\\n\\nTo sum up, we find that meeting the target FLOPs constraint has increased the inference speed of the pruned network, and smaller network size has also reduced the actual memory consumption.\\n\\n**Q2**: The proposed is not consistently better than other methods. For those inferior results, some analysis should be provided since the results violate the motivation.\\n\\n**A** : As we mentioned in the conclusion, our method consistently outperforms other 'fixed-importance' pruning methods. GBN, which outperforms our results, is one of the \\u2018trainable-importance\\u2019 pruning methods. 
Fixed-importance pruning methods are much more efficient in computational cost and memory usage, as trainable-importance pruning methods require training the whole network, while fixed-importance methods only need to train a small pruned network, as mentioned in section 4.\\n\\nFor example, we can compare our method with [1], one of the trainable-importance methods, using the ResNet architecture on the CIFAR-10 dataset. [1] trains the entire network with a sparsity regularizer for 160 epochs, prunes, and finetunes for another 160 epochs. Meanwhile, our method only requires pruning and finetuning on a smaller network for 200 epochs.\\n\\nFor further analysis, we compared our method to [2], which is neither a fixed-importance method nor a trainable-importance method, using the MobileNetV2 architecture on ImageNet. [2] requires training a PruningNet, which generates the weights of the pruned network, for 64 epochs. However, the number of parameters of a PruningNet is at least $30$ times larger than that of the original network. Furthermore, [2] searches for a well-performing pruned network via an evolutionary procedure, and this search step requires about 1000 evaluations on the test dataset. The whole process before the finetuning step takes about 2 days on a machine with 4 GPUs (RTX-2080 Ti) using PyTorch, while our method only takes 2 hours using 10 CPU (Xeon(R) Silver) cores without any GPUs.\\n\\nFinally, we note that our method has the potential to perform even better by adjusting the normalizing factors. Our experiments on MobileNetV2 in supplementary material G show that tuning the normalizing factors can improve the pruning performance.
However, we did not conduct an extensive hyperparameter search on this normalizing factor in the main experiments.\\n\\n**References**\\n\\n[1] Learning efficient convolutional networks through network slimming, ICCV 17.\\n\\n[2] MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning, ICCV 19.\\n\\n[3] NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications, ECCV18.\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"We thank reviewer 1 for the encouraging comments (\\\"Good submission focusing on a valuable topic\\\", \\\"valuable both in theory and applications\\\") and constructive feedback.\\n\\n**Q1** : Could this proposed strategy be applied to reduce the parameters of the CNN models to the level of MobileNetV2? \\n\\n**A** : Thank you for the suggestion. We included our MobileNetV2 experiment results in the table below (see supplementary material G). '(tuned)' indicates that the normalizing factor ($\\\\gamma_{l}$ in the main paper) is tuned with grid search. A fixed value is used otherwise. Our method shows performance competitive to other pruning methods, [1], [2] and MetaPruning. However, we note that our method is much more efficient than those methods since [1] requires repetitive finetuning steps on the proposed networks, [2] also requires iterative trial and error steps to train the RL agent, and MetaPruning trains PruningNet, which is at least $30$ times bigger than the original model.\\n\\n|**Network**|**Method**|**Top 1 Pruned Acc$\\\\uparrow$**|**Top1 Acc drop$\\\\uparrow$**|**FLOPs (%)$\\\\downarrow$**|\\n|:---:|:--:|:--:|:--:|:--:|\\n| MobileNetV2 | [1] | 70.9 | 0.9 | 70 |\\n| | [2] | 70.8 | 1.0 | 70 |\\n| | MetaPruning | **71.2** | **0.6** | 69 |\\n||_________________________________|_________________________________|_________________________________|_________________________________|\\n| | ours-c | 70.8 \\t | 1.0 | **67** |\\n|\\t | ours-cs | 70.2 | 1.6 | **67** |\\n| | ours-c (tuned) | 71.0 | 0.8 | **67** |\\n| | ours-cs (tuned) | 70.9 | 0.9 | **67** |\\n\\n\\n**Table** Top1 pruned accuracy and accuracy drop from the baseline network at given FLOPs on MobileNetV2 architecture at ImageNet.\\n\\n\\n\\n**Q2** : Authors determined that the proposed method is useful to tackle several classification tasks, did this method also perform well on the CNN models aimed at segmentation, detection et al. 
Performance of this kind of models may decrease more than the classification.\\n\\n**A**: Thank you for the suggestion. We applied our pruning method to FCN-32s for segmentation tasks on the PASCAL VOC 2011 dataset. We evaluated the segmentation performance with a widely-used measure, mean Intersection over Union (mIoU), and pruned an original network which has $62.58$ mIoU (see supplementary material I for more details). Our experiment results are shown in the table below. Our method reduces the FLOPs by $27$ \\\\%, with $0.15$ \\\\% mIoU drop on \\u2018ours-c' and $0.09$ \\\\% mIoU drop on \\u2018ours-cs'. We will release the pruned FCN-32s model as well as the source code.\\n\\n| **Network** | **FLOPs (%)** $\\\\downarrow$ | | **mIoU (%)** $\\\\uparrow$ | |\\n|:--:|:--:|:--:|:--:|:--:|\\n| | | **ours-c** | **ours-cs** | **Original** |\\n| FCN-32s | 20 | 49.89 | 49.88 | - |\\n|\\t | 30 | 54.20 | 54.68 | - |\\n| | 40 | 56.10 | 57.24 | - |\\n| | 50 | 58.83 | 58.95 | - |\\n| | 60 | 59.54 | 60.55 | - |\\n| | 70 | 61.65 | 61.88 | - |\\n| | 73 | 62.43 | 62.49 | - |\\n| | 100 | - | - | 62.58 | \\n\\n\\n**Table** mIoU (%) of 'ours-c' and 'ours-cs' at different FLOPs on FCN-32s at PascalVOC2016.\\n\\n**Reference**\\n\\n[1] NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications, ECCV18.\\n\\n[2] AMC: automl for model compression, ECCV18.\"}",
"{\"title\": \"Writing is good but with limited novelty.\", \"review\": \"This paper mainly improves the idea of \\\"PRUNING FILTERS FOR EFFICIENT CONVNETS\\\" by encouraging the pruning with a {0-1} optimization instead of a greedy manner. Experiments validate the effectiveness of the proposed method.\", \"pros\": [\"Writing is good, and the technical details seem sound and clear.\", \"The motivation makes sense.\"], \"cons\": \"- The novelty is limited. The formulation of the 0-1 optimization for pruning is simple and intuitive. Concretely, it leverages the pre-trained weights for the unpruned network and tries to select the kernels with the maximum magnitude. For me, I am not sure whether the novelty is up to the standard of ICLR venue. \\n- The objective is to maximize the norm of selected filters. However, magnitude-based pruning is already challenged for it is not accurate to indicate the selection. \\n- Authors claim that current pruning papers can not reach a strict constraint for FLOPs during pruning. However, it is not true for recent pruning methods, such as AutoSlim, TAS and MetaPruning. Necessary discussions are needed. \\n- Pruning on recent compact networks is favoured, such as MobileNetV2, which is also a routine network for many pruning papers. \\n\\n[ICLR2017] PRUNING FILTERS FOR EFFICIENT CONVNETS \\n[2019] AutoSlim: Towards One-Shot Architecture Search for Channel Numbers\\n[ICCV2019] MetaPruning- Meta Learning for Automatic Neural Network Channel Pruning.pdf\\n[NIPS2019] Network Pruning via Transformable Architecture Search\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"This paper introduces an optimization method for pruning channels in networks. The authors first motivate the proposed approach by showing that current pruning methods result in \\\"inactive weights\\\" in the following layer. They then introduce a QCQP optimization method that can constrain the exact amount of resources during the optimization process. Extensive experiments are conducted on different benchmarks with different backbones, and the authors also perform spatial pruning to further reduce resource usage.\\n\\n####### Strengths ######\\n+ The motivation is clear and the presentation is generally good.\\n+ The idea of mitigating the effect of inactive weights is interesting.\\n+ Extensive studies have been conducted on different datasets/backbones.\\n\\n####### Weaknesses ######\\n- The term \\\"the inherent quadratic coupling\\\" used in the abstract is a bit confusing without any explanation.\\n- I didn't quite follow section 2.2, where the authors discuss the quadratic coupling effect. In figure 2, I understand the pruned channels for the greedy part, but I don't quite get how the proposed approach is able to prune the last channel of the 2nd layer. It would be nice to discuss this when the optimization is introduced.\\n- Following the previous point, the authors are basically saying QCQP is better than greedy pruning, but I couldn't find experiments comparing these two methods. I understand previous methods are greedy-based, but it would be nice to have the same implementation by the authors for an apples-to-apples comparison.\\n- Missing references:\\n[1] Rethinking the Value of Network Pruning\\n[2] Channel Gating Neural Networks\\n[3] Dynamic Channel Pruning: Feature Boosting and Suppression\\n\\n\\n############## Post Rebuttal ###############\\n\\nMy concerns are addressed by the authors. I'm keeping my original rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
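(The "inactive weights" issue this review highlights can be illustrated with a small numpy sketch — the shapes and pruned indices here are arbitrary. Pruning output channels of layer k leaves the corresponding input-channel slices of layer k+1 dead, which a purely per-layer pruning criterion does not account for.)

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3, 3, 3))  # layer 1: 3 -> 4 channels, (out, in, kH, kW)
W2 = rng.standard_normal((2, 4, 3, 3))  # layer 2: 4 -> 2 channels

# Suppose a per-layer criterion prunes output channels 1 and 3 of layer 1.
keep = np.array([True, False, True, False])

# Coupling: every layer-2 weight reading from a pruned channel is inactive.
inactive = W2[:, ~keep]  # shape (2, 2, 3, 3)
print(f"{inactive.size} of {W2.size} layer-2 weights become inactive")
# -> 36 of 72 layer-2 weights become inactive
```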
"{\"title\": \"Good motivation and more metrics should be considered\", \"review\": \"Summary:\\nIn this manuscript, a new pruning method is proposed by considering the inherent quadratic constraint between consecutive layers. Without this constraint, inactive weights cannot be safely removed; even with the same objective function, the optimized result differs, as shown in the motivation section. Based on this observation, the pruning task is modeled as a QCQP optimization problem, and a faster algorithm to solve this problem is proposed. Moreover, pruning of the filter size can also be modeled as a QCQP problem, making pruning of both the channels and the filter size feasible.\", \"strengths\": [\"The paper is well-written and well-motivated. The motivation is reasonable and the proposed method does alleviate the overlooked issue.\", \"The results on CIFAR10 and ImageNet surpass some previous methods.\", \"The proposed method does not need an iterative pruning procedure as other methods do, making it simple to use.\", \"The motivation may inspire follow-up work in this area.\"], \"weaknesses\": [\"I am not an expert in the area of pruning. I think the motivation is quite good, but the results seem less impressive. Moreover, I believe the results should be evaluated from more aspects, e.g., the actual latency on the target device, the memory consumption at inference time, and the actual network size.\", \"The performance is only compared with a few methods, and the proposed method is not consistently better than the others. For the inferior results, some analysis should be provided, since they run counter to the motivation.\", \"I am willing to change my rating according to the feedback from the authors and the comments from other reviewers.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
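(To make the "quadratic constraint between consecutive layers" mentioned in this review concrete: in a {0,1} formulation, the survival of a layer-2 weight depends on the product of two mask variables, one per adjacent layer. A toy brute-force version — with made-up shapes and a simple channel-count budget standing in for the FLOPs constraint, not the paper's solver — might look like:)

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 2))  # toy layer 1: rows = its 3 output channels
W2 = rng.standard_normal((2, 3))  # toy layer 2: columns read layer-1 outputs

def kept_norm(m1, m2):
    # W2[j, i] survives only if output j (m2[j]) AND input i (m1[i]) are
    # kept -- the product m2[j] * m1[i] is the quadratic coupling term.
    return (np.abs(W1) * m1[:, None]).sum() + (np.abs(W2) * np.outer(m2, m1)).sum()

budget = 3  # keep at most 3 channels in total (proxy for a FLOPs budget)
best, best_masks = -1.0, None
for m1 in product([0, 1], repeat=3):      # enumerate layer-1 channel masks
    for m2 in product([0, 1], repeat=2):  # enumerate layer-2 channel masks
        if sum(m1) + sum(m2) <= budget:
            v = kept_norm(np.array(m1), np.array(m2))
            if v > best:
                best, best_masks = v, (m1, m2)
print(best_masks)
```

Real formulations replace this exhaustive search with a QCQP solver, but the coupled objective is the same shape.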
"{\"title\": \"Good submission focusing on a valuable topic\", \"review\": \"The authors propose a pruning method that aims to reduce the parameters and heavy computational cost of large convolutional neural networks (CNNs). According to the comparison experiments performed on several widely used network architectures, the proposed strategy can effectively reduce the number of parameters with little performance degradation. I think this study is valuable in both theory and applications.\\nHowever, several issues could be addressed to further improve the submission:\\n\\n(1) Were the pruned CNN models deployed in a real resource-constrained environment, such as autonomous-driving hardware with limited computational capability? Could the proposed strategy reduce the parameters of the CNN models to the level of MobileNet?\\n\\n(2) The authors showed that the proposed method is useful for several classification tasks; does it also perform well on CNN models aimed at segmentation, detection, etc.? Performance of such models may decrease more than in classification.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
}