Dataset schema (column, dtype, observed range):

forum_id        stringlengths    9 to 20
forum_title     stringlengths    3 to 179
forum_authors   sequencelengths  0 to 82
forum_abstract  stringlengths    1 to 3.52k
forum_keywords  sequencelengths  1 to 29
forum_decision  stringclasses    22 values
forum_pdf_url   stringlengths    39 to 50
forum_url       stringlengths    41 to 52
venue           stringclasses    46 values
year            stringdate       2013-01-01 00:00:00 to 2025-01-01 00:00:00
reviews         sequence         (nested review records; see below)
forum_id: rJlRKjActQ
forum_title: Manifold Mixup: Learning Better Representations by Interpolating Hidden States
forum_authors: [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Aaron Courville", "Ioannis Mitliagkas", "Yoshua Bengio" ]
forum_abstract: Deep networks often perform well on the data distribution on which they are trained, yet give incorrect (and often very confident) answers when evaluated on points off the training distribution. This is exemplified by the adversarial examples phenomenon, but can also be seen in terms of model generalization and domain shift. Ideally, a model would assign lower confidence to points unlike those from the training distribution. We propose a regularizer which addresses this issue by training with interpolated hidden states and encouraging the classifier to be less confident at these points. Because the hidden states are learned, this has the important effect of encouraging the hidden states for a class to be concentrated such that interpolations within the same class or between two different classes do not intersect with the real data points from other classes. This has a major advantage in that it avoids the underfitting which can result from interpolating in the input space. We prove that the exact condition for this underfitting to be avoided by Manifold Mixup is that the dimensionality of the hidden states exceeds the number of classes, which is often the case in practice. Additionally, this concentration can be seen as making the features in earlier layers more discriminative. We show that despite requiring no significant additional computation, Manifold Mixup achieves large improvements over strong baselines in supervised learning, robustness to single-step adversarial attacks, semi-supervised learning, and Negative Log-Likelihood on held-out samples.
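To make the regularizer described in this abstract concrete, here is a minimal, hypothetical PyTorch-style sketch of one Manifold Mixup training step. The split of the network into a `blocks` list, the helper name `manifold_mixup_step`, and the Beta(alpha, alpha) sampling of the mixing coefficient are illustrative assumptions, not the authors' released implementation.

```python
import random
import numpy as np
import torch
import torch.nn.functional as F

def manifold_mixup_step(blocks, x, y, num_classes, alpha=2.0):
    """One supervised step: mix hidden states at a random layer and mix labels."""
    lam = float(np.random.beta(alpha, alpha))   # mixing coefficient for this minibatch
    k = random.randrange(len(blocks))           # layer to mix at; k == 0 recovers Input Mixup
    perm = torch.randperm(x.size(0))            # random pairing of examples in the batch

    h = x
    for i, block in enumerate(blocks):
        if i == k:                              # interpolate hidden states before this block
            h = lam * h + (1 - lam) * h[perm]
        h = block(h)                            # final block is assumed to output logits

    y_soft = F.one_hot(y, num_classes).float()  # soft (mixed) targets
    y_soft = lam * y_soft + (1 - lam) * y_soft[perm]
    return -(y_soft * F.log_softmax(h, dim=1)).sum(dim=1).mean()
```

The loss backpropagates through the interpolation itself, which the discussion below argues is essential to how the method reshapes the hidden representations.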
[ "Regularizer", "Supervised Learning", "Semi-supervised Learning", "Better representation learning", "Deep Neural Networks." ]
forum_pdf_url: https://openreview.net/pdf?id=rJlRKjActQ
forum_url: https://openreview.net/forum?id=rJlRKjActQ
venue: ICLR.cc/2019/Conference
year: 2019
{ "note_id": [ "rke_iSHHlE", "SyexuodbeN", "r1lPNB0RJN", "SkxG9-lf14", "BkeJVJ6Zk4", "BJlMBdjZy4", "B1g7zHf-y4", "Skgiansey4", "S1ldh9QeJN", "rJx6E57xk4", "SJg3Fs01y4", "SyxsI4RyJE", "SyxHCKQA0m", "H1eto0C6Cm", "HkljggY207", "H1gr9yt2A7", "SylMhAdnRQ", "Bylw_ZnKRQ", "S1etENZK0m", "SygudipOAm", "r1xtQt6_AX", "SJxvtyIPCX", "B1lmEyIwCQ", "B1xatAFN0m", "Bkg5lrwV0X", "HylpPZyNRQ", "rJlZwlog0Q", "BkgpCYcuTQ", "Hkl5Tu5d6Q", "HJl6ia7867", "HylDFCwHp7", "rJekpcvSa7", "S1xQL8DBTX", "SJg-oP7epQ", "SJeIIMu9hX", "Hkegs4KP2m", "Bye9qJi4hQ" ], "note_type": [ "meta_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545061791781, 1544813416334, 1544639791147, 1543795082423, 1543782183356, 1543776313645, 1543738635312, 1543711938615, 1543678639991, 1543678516796, 1543658371599, 1543656531416, 1543547341148, 1543528096697, 1543438322552, 1543438221474, 1543437994277, 1543254383132, 1543210033169, 1543195504481, 1543194912970, 1543098238800, 1543098155505, 1542917765103, 1542907122190, 1542873445213, 1542660184764, 1542134229199, 1542133954407, 1541975461077, 1541926526886, 1541925558653, 1541924426541, 1541580697222, 1541206606011, 1541014679662, 1540824978161 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper491/Area_Chair1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper491/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/AnonReviewer2" ], [ "~Yongyi_Mao1" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper491/Authors" ], [ "ICLR.cc/2019/Conference/Paper491/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper491/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper491/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper contains useful information and 
shows relative improvements compared to mixup. However, some of the main claims are not substantiated enough to be fully convincing. For example, the claim that manifold mixup can prevent the manifold collision issue, where the interpolation between two samples collides with a sample from another class, is incorrect. The authors are encouraged to incorporate the remarks of the reviewers.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Needs improvement.\"}", "{\"comment\": \"The baseline of ResNet-50 on ImageNet is lower than the currently reproduced one, ~76.5% top-1 (e.g., mixup: Beyond Empirical Risk Minimization). It is OK that the baseline is a bit lower. However, it would be convincing (to me) if the proposed approach could reach or exceed 76.5% + 0.1% (std).\", \"title\": \"The baseline of ResNet-50 on ImageNet\"}", "{\"title\": \"Feedback on Rebuttal\", \"comment\": \"Hello,\\n\\nThank you for your time in reviewing, and we appreciate your time in discussing with us. I want to summarize so far: \\n\\n1. For your original major concerns #1/#2, we added a new appendix explaining how Manifold Mixup changes the learned representations intuitively, as well as a new spectral analysis of the representations which shows how this change happens empirically, even when mixing follows the first hidden layer. \\n\\nWe also acknowledge that we don't want to overclaim what manifold mixup does here, as it still uses interpolations in the input layer and early hidden layers (which could, but would not necessarily, lead to underfitting on some datasets). However, our results show that it does help empirically to use it in almost any combination of layers, and we have strong evidence that this largely results from manifold mixup changing the learned representations. We revised the abstract to make this clear. \\n\\n2. We performed new experiments which exactly address major concerns #3 and #4, namely sensitivity to alpha and results on SVHN. \\n\\n3. We addressed the \\\"minor remarks\\\" by adding a more thorough discussion of AdaMix and adding results from that paper into our tables. We have also made a case for why it is distinct from and likely to be complementary to manifold mixup. \\n\\nWhile I agree that there is more that could be learned about the method, we have made a substantial effort to address all of your major concerns. At least half of your major concerns (#3/#4), those related to new experiments, have been fully addressed. \\n\\nIs there anything else that we could do that would address any remaining concerns that you have? This is very important to us, and we really appreciate your feedback so far.\"}", "{\"title\": \"Thanks for feedback\", \"comment\": \"Hello,\\n\\nThank you for taking the time to review our paper. Through the course of the rebuttal, we have conducted some new experiments which address some of the points that you've raised, and we have also produced some arguments in favor of the novelty of the work.
I think that the spectral analysis of the representations (Appendix I) is especially important, as it shows a significant flattening effect from the use of Manifold Mixup, and no consistent effect from Input Mixup, which is strong evidence in favor of Manifold Mixup working through a novel mechanism.\"}", "{\"title\": \"Thanks for Feedback and Reviewing\", \"comment\": \"We appreciate all of your help in reviewing the paper, and we think that several of the points that you've raised have helped us to make the paper better (such as the new appendix on how the method works and the change of the abstract, though this came after the final deadline for updating the paper).\\n\\nI believe that the deadline for authors to post comments is closing soon, but if you could take a look at our most recent response, as well as the new experiments for the rebuttal, we'd really appreciate it. \\n\\nOnce again, thanks for all of your help.\"}", "{\"title\": \"Thanks\", \"comment\": \"Hello,\\n\\nWe definitely agree that we should have cited this, and we sincerely apologize for not doing so. We will do so immediately when we get a chance to update the paper. The (Wang et al. 2018) method is interesting and shares some important similarities and differences with Manifold Mixup. I'll discuss them below for the benefit of readers: \\n\\n-A really big difference is that Manifold Mixup interpolates in multiple layers, and in practice we never did it directly at the output layer. On the other hand, the WNLL method (Wang et al. 2018) exclusively operates in the output layer. \\n\\n-If I understand it correctly, WNLL frames their algorithm with an Euler-Lagrange equation (Equation 2) where the cost is based on unlabeled data (and the pseudolabels at those points) and the labeled data provide constraints. In some ways this is related to how we do semi-supervised learning in Manifold Mixup by using a pseudolabel loss on interpolated points between unlabeled data and a normal supervised loss on the labeled data. However, in Manifold Mixup we just add these two losses together and use a weighting between them (closely following other work with consistency losses like VAT (Miyato 2017)). \\n\\n-Manifold Mixup uses simple backpropagation through the interpolation, whereas WNLL has a more complicated training procedure. Critically, WNLL avoids backpropagating through the interpolation procedure, whereas Manifold Mixup presents evidence that backpropagating through the interpolation is essential to why the method works. Additionally, the two methods, as far as I can tell, have rather distinct motivations and especially distinct theory. \\n\\n-The regularization effect is larger with Manifold Mixup when we compare the same architectures and use the full dataset (for example, on CIFAR-10/PreActResNet18, Manifold Mixup is 2.89% error and WNLL is 4.74%, with a baseline of 6.21%). However, it is still quite impressive that WNLL (Wang et al. 2018) achieves such significant gains while only operating in the output layer. Also, Manifold Mixup doesn't have the same experiments with very small amounts of labeled data, and WNLL shows strong results in these cases.\"}", "{\"title\": \"Feedback on Response\", \"comment\": \"Hello,\\n\\nDo you have any additional feedback on our response?
We greatly appreciate the time that you've given to discussing the paper with us so far and providing feedback.\"}", "{\"comment\": \"https://arxiv.org/pdf/1802.00168.pdf\", \"title\": \"See this one\"}", "{\"title\": \"Thanks for Continuing Discussion / Feedback (2)\", \"comment\": \"\\\"I really appreciate the authors\\u2019 rebuttal. However, I am not convinced that the proposed Manifold Mixup scheme in its current form can avoid sample collision, for the following reasons.\\u201d\\n\\nWe agree with this, especially because we still do mix in the input layers. \\n\\n\\u201cMy understanding is that in order for the proposed method to work as claimed, the network layers below the layer chosen for mixing samples need to be powerful enough to drive the training error close to zero. ... \\u201ca randomly chosen layer per minibatch to perform mixup\\u201d as implemented by the current Manifold Mixup approach, will not be able to avoid sample collision.\\u201d\\n\\nManifold Mixup consistently improves test accuracy over the baseline for any combination of layers to mix in (see our response to R1). In fact, substantial regularization is achieved even when we mix after the 3rd resblock, which is near the end of the network. At the same time, it also helps to mix in earlier layers. \\n\\nI think that our goal and procedure are very different from AdaMix. And for what it\\u2019s worth, the ideas seem very complementary to us. Perhaps one could train with Manifold Mixup, but when mixing in the input layer (and also the 1st hidden layer) one could use AdaMix on those updates. When mixing in the deeper layers, we would have the desirable effect of Manifold Mixup: learning flatter representations and broader regions in the hidden space with lower confidence classifications - and the use of AdaMix would let us use larger alphas when we do mix in the input layer. \\n\\nThus we don\\u2019t see any conflict between the ideas. Our paper is about mixing in the hidden layers, and about how the dynamics of mixing in the hidden layers affects what they learn. This is related to reducing inconsistent interpolations, but it is primarily a \\u201cmeans to an end\\u201d of changing how the hidden layers represent the data. \\n\\n\\u201cIn addition, a ResNet as used by the paper will not be able to attain this collision avoidance goal; a network with tailored layers (with sufficient modeling capability) below the mixup layer is needed. In this sense, unlike Dropout or Batch Normalization, Manifold Mixup is NOT a plug-and-play regularization scheme.\\u201d\\n\\nWe understand your point here, but if you look at it from the perspective of test accuracy on real datasets, it does seem to be \\\"plug and play\\\". For example, in our CIFAR-10 analysis of the mixing layer given in the response to R1 (Appendix K), you can see that every mixing combination improves over the baseline, and most configurations improve over Input Mixup. \\n\\n\\u201cThe current form of Manifold Mixup seems to have two conflicting objectives to me. On one hand, it requires the mixing layer to be closer to the output layer in order to reduce the collision issue and generate informative representations. On the other hand, mixing close to the output layer will have a negative impact on regularization.
A further study on this tradeoff would be very beneficial.\\u201d\\n\\nOur strongest theoretical guarantee only holds for deeper layers, but intuitively the flattening of representations can still occur even for early layers, and there\\u2019s strong evidence that it still has a positive effect on regularization. Also, our analysis in Appendix I shows that this flattening happens empirically even following a single hidden layer. \\n\\nAt a high level, we strongly agree that we need to soften and reduce some of our claims for what Manifold Mixup accomplishes (specifically, we don\\u2019t want to claim that our method completely avoids collision). At the same time, we still think that our contribution is significant and well supported - which is that Manifold Mixup changes how deep networks represent information by flattening the class-specific representations and assigning low confidence classifications to more of the space.\"}", "{\"title\": \"Thanks for continuing discussion / feedback (1)\", \"comment\": \"Thanks for writing back - we really appreciate the feedback as well as the effort that goes into continuing the discussion. It means a lot to us. We also think that we now have a pretty clear understanding of what your objection is.\\n\\nWe agree with the technical claims in your response. However, reducing collision is NOT the main point or contribution of the paper. We believe that our main contribution is that manifold mixup dramatically changes the representations learned by deep networks: the distribution of the representations of the examples of each class becomes flattened and concentrated, and more of the hidden space corresponds to less confident classifications. We have direct empirical evidence for this happening, even in the early layers of the network (spectral analysis in Appendix I on MNIST). Note that this is very different from both Mixup and AdaMix, as both of those methods still only interpolate in the input space - and our hidden representations look completely different from Input Mixup or other well-known regularizers. \\n\\nThanks for helping us make this a better paper by tightening the actual claim of the paper, which is not of perfectly preventing inconsistent interpolations (collisions), but rather of changing the distribution of representations in a way that is useful for classification, making Manifold Mixup a significant piece of research. We will revise the abstract accordingly and move the emphasis onto what we think is more important. Here is the proposed and more modest new abstract, highlighting our contributions more clearly and accurately: \\n\\n\\u201cDeep networks often perform well on the data distribution on which they are trained, yet give incorrect (and often very confident) answers when evaluated on points off the training distribution. This is exemplified by the adversarial examples phenomenon but can also be seen in terms of model generalization and domain shift. Ideally, a model would assign lower confidence to points unlike those from the training distribution. We propose a regularizer which addresses this issue by training with interpolated hidden states and encouraging the classifier to be less confident at these points. Because the hidden states are learned, this has the important effect of encouraging the hidden states for a class to be concentrated and flattened, with more of the volume of the hidden space mapping to lower confidence classifications.\\n\\nThis concentration of the class-specific representations can be seen as making the features in earlier layers more discriminative. We prove some exact conditions on how Manifold Mixup changes the representations for a sufficiently deep layer: specifically, that there is a flattening effect related to the number of classes and the number of hidden units. We back up this theoretical analysis of the ideal case by conducting an empirical spectral analysis of the learned representations, showing that this flattening occurs even when we mix immediately following the first hidden layer. We show that despite requiring no significant additional computation, Manifold Mixup achieves large improvements over strong baselines in supervised learning, robustness to single-step adversarial attacks, semi-supervised learning, and Negative Log-Likelihood on held-out samples.\\u201d\\n\\nThere are a few other points where the writing would need to be changed to be consistent with that, for example, the caption of Figure 1. We will also change all claims of \\u201cavoiding\\u201d or \\u201cremoving\\u201d inconsistent interpolation to the more modest and strongly supported claim that inconsistent interpolations are reduced as part of how the hidden layers' representations are changed.\"}", "{\"title\": \"The link you sent is not working\", \"comment\": \"Hello,\\n\\nThanks for your comment. Can you check the link and send again?\"}", "{\"comment\": \"A similar idea of interpolation has appeared in an early paper by Wang et al. https://arxiv.org/pdf/1802.00168.pdf. The authors should mention related work.\", \"title\": \"A Related Paper by Wang et al.\"}", "{\"title\": \"Concerns After Rebuttal\", \"comment\": \"I really appreciate the authors\\u2019 rebuttal. However, I am not convinced that the proposed Manifold Mixup scheme in its current form can avoid sample collision, for the following reasons.\\n\\nMy understanding is that in order for the proposed method to work as claimed, the network layers below the layer chosen for mixing samples need to be powerful enough to drive the training error close to zero. This requirement is consistent with one of the public comments posted by one author of AdaMixup, which first introduces and formally analyzes the concept of the manifold collision issue in Mixup. Also, this requirement seems to be further confirmed by the authors\\u2019 new observations of \\u201cwe found that mixing in deeper layers successfully reduces training error (with greater reduction for deeper layers)\\u201d. In other words, \\u201ca randomly chosen layer per minibatch to perform mixup\\u201d as implemented by the current Manifold Mixup approach, will not be able to avoid sample collision. In addition, a ResNet as used by the paper will not be able to attain this collision avoidance goal; a network with tailored layers (with sufficient modeling capability) below the mixup layer is needed. In this sense, unlike Dropout or Batch Normalization, Manifold Mixup is NOT a plug-and-play regularization scheme. \\n\\nThe current form of Manifold Mixup seems to have two conflicting objectives to me. On one hand, it requires the mixing layer to be closer to the output layer in order to reduce the collision issue and generate informative representations. On the other hand, mixing close to the output layer will have a negative impact on regularization.
A further study on this tradeoff would be very beneficial.\"}", "{\"title\": \"Thanks\", \"comment\": \"\\\"I am satisfied with your rebuttal (which makes things clearer about which layer should be selected, and the connection with vicinity ideas)\\\"\\n\\nIn this case, would you be willing to increase your confidence? If there are central parts of the paper that you're still uncertain about, we'd be happy to provide more details or conduct additional experiments. \\n\\n\\\"Proper and rigorous acknowledgement of the existing literature with accurate up-to-date bibliographic details is the bare minimum one could ask from a scientific paper (submitted for publication!), and blaming it on Google Scholar is not very serious :)\\\"\\n\\nI apologize for this. It did require some effort to find the proper citations for some conferences, but I agree that it is important.\"}", "{\"title\": \"Feedback on Rebuttal\", \"comment\": \"Hello,\\n\\nCan you give any feedback on our rebuttal? We've tried to address specific concerns, especially related to the effect of varying the alpha hyperparameter. Additionally, we fixed the preprint issue. \\n\\nIf there is anything else that we could do that would affect your confidence or views on the paper, we'd be happy to take a look at it.\"}", "{\"title\": \"Feedback on Rebuttal\", \"comment\": \"Hello,\\n\\nCan you give any feedback on our rebuttal?\"}", "{\"title\": \"Feedback on Response\", \"comment\": \"Hello,\\n\\nIn your most recent response, you mentioned issues related to the flattening of the manifold and the training loss. Can you give any feedback on our response to these issues? We really appreciate it.\"}", "{\"title\": \"Rebuttal Summary and Highlights\", \"comment\": \"We thank the reviewers for their feedback, and we believe that it\\u2019s done a great deal to help us make the paper better. We want to provide a summary of the new results that we\\u2019ve produced and how they relate to reviewer feedback.\\n\\n1. Novelty: In our opinion, Manifold Mixup works through a mechanism which is very different from Input Mixup, and we have analyzed this both theoretically (section 3) and empirically (Spectral Analysis in Appendix I). Note that this analysis is completely different from the analysis in Mixup, and is indeed almost totally unrelated to that work, as it strongly relies on the states being learned. The way that Manifold Mixup changes the representations is very different from other state-of-the-art regularizers including dropout, batch normalization, injecting noise, input mixup, and weight decay (as shown in Figure 1 and Figure 6). \\n\\n2. Flattening of the class-conditioned manifolds: Many reviewers were unsure about what we meant by \\u201cflattening\\u201d or were skeptical about whether such an effect could occur even when the exact conditions of the theorem in section 3 didn\\u2019t hold. We have clarified that flattening in the class-specific representations refers to a reduction in the within-class variability in some directions. We have added a new Spectral Analysis of the learned representations (Appendix I) which shows that Manifold Mixup significantly reduces many of the class-specific singular values, which essentially results in \\u201cflattening\\u201d of class-conditional manifolds (we found no consistent effect with Input Mixup). Importantly, we showed that this happens even when we do this analysis immediately following the first hidden layer. Intriguingly, we also found that the non-class-specific representations (i.e.
all representations grouped together) show no flattening with Manifold Mixup, which is consistent with the intuition of the proof in section 3: that variability is removed in directions which point towards the representations of other classes. \\n\\n3. Inconsistent interpolations, or the collision issue in interpolations: Reviewers were unsure about whether Manifold Mixup could successfully reduce \\u201cinconsistent interpolations\\u201d (the collision issue, where the interpolation between two samples collides with a sample from another class), especially when the number of classes is large. We ran additional experiments to address this. On CIFAR-100, we found that mixing in deeper layers successfully reduces training error (with greater reduction for deeper layers), Appendix C, Figure 10 (and the same on CIFAR-10, in Figure 9). Additionally, we were able to improve over Mixup on the ImageNet dataset, which has 1000 classes. \\n\\n4. Intuitive explanation of how Manifold Mixup changes the learned hidden states to reduce inconsistent interpolations: We have added a new Appendix H illustrating a toy example of how the hidden states can be reorganized via Manifold Mixup to avoid the inconsistent interpolations (collision issue). Additionally, in our response for R2, we conducted a simple experiment where we treat the hidden states as independent learned parameters (initialized randomly) to show that gradient descent can separate the classes even when they completely overlap initially: https://media.giphy.com/media/24lp18V63om8G0dvvg/giphy.gif\\n\\n5. Analysis of the hyperparameter \\u201calpha\\u201d and the layers in which mixing is done: As suggested by the reviewers (especially R1 and R2), we conducted additional experiments to address these questions. In Appendix J, we show that Manifold Mixup improves over Input Mixup for a wide range of alpha values. Furthermore, in Appendix K, we show that mixing in multiple layers improves test accuracy.\"}", "{\"title\": \"Regarding the Visualization in Figure 4\", \"comment\": \"\\u201c3.\\tAny thoughts on my comment regarding Figure 4, which seems to be an indication of the collision issue? On the other hand, I have to admit that my observation could be a bit subjective in this case.\\u201d\\n\\nI agree with your basic intuition here, that with Manifold Mixup (Figure 4), many of the points which are given 80% probability of class A actually seem to only have features from class A - for example, the fox in the bottom right. However, near the middle, it clearly has a mix of semantically meaningful car and fox attributes. While I agree that this is important, this is not a collision issue because, although the interpolated points look somewhat unrealistic, they do not look like points from classes other than the two classes being interpolated.\\n\\nAnother thing to keep in mind is how this visualization was created. The images shown in Figure 4 are the mappings of the interpolated hidden states to the input space using a learned decoder network trained to predict real data points from their hidden states (using square loss). As you pointed out in your previous comment, \\u201cfor mixing ratio of 0.6 (meaning the created image has almost half labels from the two original images), MixUp clearly shows, for instance in the second row, that there are two overlapped images (Horse and Plane), but Manifold Mixup seems to have only the Plane in the mixed image with a soft label.\\u201d, it is true that some of the images seem to have attributes from only one class but will be given soft labels of 50% from class A and 50% from class B. However, this may be a limitation of the decoder network we used and its fidelity. That is, it is possible that the decoder network was not able to map the interpolated hidden space to an image with 10% horse attributes, even though the interpolated hidden space had some attributes from the horse class.\"}", "{\"title\": \"Addressing Main Concerns (2)\", \"comment\": \"\\\"I wonder if the following suggestions could further help improve the paper ... Plot all the training loss (with synthetic samples) in the supervised cases.\\u201d\\n\\nWe added a new plot to the paper (Figure 9 of Appendix C) showing the training loss curves for mixing at different levels on CIFAR-10 (each experiment used alpha=2.0; the model is PreActResNet18). The results of this are clear and consistent over the course of training. At the end of the 30th epoch of training, the train losses are: \\n\\n{0}: 0.155\\n{1}: 0.146\\n{2}: 0.127\\n{3}: 0.112\\n{0,1,2,3}: 0.144\\n{0,1,2,3}, blocking before mixing: 0.164\\n\\nThus we can see that relatively early in training, the lowest training error comes from mixing in the deepest layer and a much higher loss comes from mixing in the input layer. Intriguingly, an even higher loss is obtained from mixing in a random layer but blocking the gradient before the mixing layer. \\n\\n\\u201c2. Provide more analysis on the Cifar100 data. This is because I suspect that the collision issue could be worse when handling datasets with a large number of classes. Ideally, it would be very convincing to have results from ImageNet, which has 1000 classes. BTW, this is also a suggestion from AnonReviewer3. I think it is a really good suggestion\\u2026.\\u201d\\n\\nWe did two things to address this: first, we analyzed the training loss for CIFAR-100, showing that mixing in deeper layers greatly reduces training loss (Figure 10 of Appendix C).\\n\\nMixing in different layers on CIFAR-100 (train cross-entropy/error at 30 epochs): \\n\\n{0}: 0.0357\\n{1}: 0.0341\\n{2}: 0.0332\\n{3}: 0.0276\\n{0,1,2,3}: 0.0333\\n\\nSecondly, we have results on ImageNet showing significant improvement with Manifold Mixup. ImageNet has some unique challenges. Perhaps most importantly, distributed training on ImageNet typically uses very large batch sizes. Manifold Mixup samples a lot of variables randomly once per minibatch in our usual formulation (i.e. one lambda sampled per batch and one layer), and in practice we found that on ImageNet this led to a lot of variance between the updates and slowed down training. To address this, we sampled a different lambda for each pair of examples in the batch, which made the loss curves much smoother and made convergence similar to Input Mixup. With all models we used the same hyperparameters except for the choice of layer to mix in, and we used alpha=0.2. In all cases we used a ResNet50 and trained for 200 epochs.
While our baseline is somewhat weak, these results suggest that Manifold Mixup can still outperform Input Mixup when the number of classes is large:\\n\\nModel: Top-1 / Top-5 Validation Accuracy\\nBaseline: 75.462 / 92.628\\nInput Mixup: 75.944 / 92.844\\nManifold Mixup {0,1,2}: 76.102 / 92.870\\nManifold Mixup {0,3}: 76.032 / 92.906\\n\\nResNet50 is a relatively small model for ImageNet, and we hope that the performance gain (improvement in test accuracy) with Manifold Mixup will be larger with larger models, since we achieved significantly larger gains when using larger models and training for longer on CIFAR-10 and CIFAR-100.\\n\\n\\u201cIn short, the paper\\u2019s main novelty and contribution is addressing the collision issue in Mixup. I am willing to increase my score if you could address my main concern here.\\u201d\\n\\nThe primary goal of the paper is to show how mixing in the hidden layers helps to learn better feature representations. It is true that, as a consequence, this can reduce underfitting relative to Mixup (by avoiding the collision issue), but this is only one consequence. In our view, the more interesting and novel consequence is that this causes a flattening of the learned class-specific representations (see section 3 and Appendix I especially), which encourages the features to be more discriminative.\"}", "{\"title\": \"Addressing Main Concerns (1)\", \"comment\": \"\\u201cI am glad to see that your method is less sensitive to the pre-defined Alpha than Mixup, so I think you may want to further emphasize that in your paper. Also, the results from SVHN are helpful, though I did not see them in the revision. It would be beneficial to have these good results in the paper.\\u201d\\n\\nThanks, we just added the SVHN results (Table 6) to the revision as well as the new analytical experiments you\\u2019ve suggested (Appendices J&K). \\n\\n\\u201cit may be a good idea to further clarify ... the meaning of \\u2018flatten\\u2019\\u201d\\n\\nWhen we refer to flattening, we mean that the class-specific representations have reduced variability in some directions. Our new spectral analysis in Appendix I makes this more concrete and general, in that we ran a singular value decomposition on the class-specific representations and found that most of the singular values were greatly reduced. This has a specific geometric interpretation in which the shape of the class-specific representations can be seen as being more like an ellipsoid (where many singular values are smaller) and less spherical. Thus we can see it as a flattening. Note also that our SVD analysis in Appendix I confirms that there is only a class-specific flattening, and the overall representation space is not flattened, which is in line with the intuition that variability is removed in directions which point towards the representations of other classes. \\n\\nSection 3 characterizes a sufficient condition for inconsistent interpolations to be avoided, in which some directions lose variability completely, and thus we can see those directions as \\u201cflattened\\u201d. \\n\\n\\u201cHowever, my main concern ... is the claim that the synthetic interpolations generated by Manifold Mixup will not collide with a real sample ... at the earlier stages of the training, collided or conflicted samples are used to chase the \\u201cflatten manifold\\u201d goal. ...
training loss could be high, which may prevent you from \\u201cflattening\\u201d the manifolds.\\u201d\\n\\nThanks for bringing this point up. If we understand correctly, your point is that even if hypothetically the points could be arranged to avoid collisions (i.e. section 3), this could be difficult to achieve in practice, especially in the early parts of training where performance on the task is poor. \\n\\nLet us suppose X1, X2, X3 are examples from three different classes A, B and C respectively. And let us suppose h1, h2 and h3 are the hidden representations of these samples at layer \\\"h\\\". Now let us suppose we interpolate h1 and h2 such that the interpolation collides with h3. That is, h_interpolated = lambda*h1 + (1-lambda)*h2 = h3.\\n\\nNow assume that we are training the model with these two samples (h_interpolated, lambda*A + (1-lambda)*B) and (h3, C).\\n\\nIt is indeed a collision for the layers above the layer \\\"h\\\", since they are being fed the same h-representation but different outputs. But when we do the parameter update, the gradient passes through the entire network, and hence the network below the layer \\\"h\\\" will adapt itself such that both of the samples (h_interpolated, lambda*A + (1-lambda)*B) and (h3, C) are satisfied. And this can be done only if h1, h2 and h3 are changed in such a way that the interpolation between h1 and h2 (h_interpolated) does not collide with h3. So supposedly, if in the next update the interpolation happens between the hidden states of the same samples X1 and X2, it will not collide with the hidden state of sample X3.\\n\\nIt is worth noting that while section 3 gives sufficient conditions for this to happen perfectly, in practice the model should be able to minimize the inconsistent interpolation problem even if it can\\u2019t do it perfectly. For example, the highest-loss type of inconsistent interpolation would involve interpolating between two points from class A and that interpolation overlapping with a point from class B. This point would be given labels of 100% A (interpolating) and 100% B (on that real point), which would lead to very high training error. But if the model could move the B point slightly off the interpolation, it would reduce the loss quite a lot even if the interpolations aren\\u2019t perfectly consistent. \\n\\nWe also conducted a simple new experiment resulting in some new animated gifs, where we treat each hidden state as a learnable parameter (plus a small amount of Gaussian noise) and where they are all initialized randomly, such that the two classes are totally entangled initially. We can see that gradient descent can easily learn to pull the two classes apart, even though the initial states overlap completely on the first step (note that I think this process will be even easier in a high-dimensional space). This provides some insight into what happens in the scenario that you\\u2019re concerned about - where the states are very noisy at the beginning of training: https://media.giphy.com/media/1wXeQi6xHO4UKMnG5s/giphy.gif\"}", "{\"title\": \"Thanks for your valuable feedback (2)\", \"comment\": \"\\\"3) I actually do not see the value of picking a random layer to do mixUp. It seems to me that as long as at one layer the network is capable of transforming the data into the \\\"flattened\\\" manifold (using your word, but actually I think it is not the correct word), that is sufficient. Is there any principle underlying such a randomized strategy?\\\"\\n\\nOn the 2d spiral dataset, it worked well to only mix in a single hidden layer, but in general it's difficult to know which layer to mix in. Mixing later will do a better job of avoiding the intrusion / inconsistent interpolation problem, but may have a more limited effect in terms of regularization on some datasets (which is confirmed experimentally in our response to Reviewer #1, where mixing in multiple layers helped). \\n\\n\\\"\\\"flattened\\\" manifold (using your word, but actually I think it is not the correct word)\\\"\\n\\nWhat we mean by this is that variability is removed in some directions (thus these directions become more \\\"flat\\\"), and this flattening only occurs in the class-specific representations; the intuition is that variability is reduced in directions which point towards examples from other classes. The spectral analysis in Appendix I provides support for this empirically, and the theory in section 3 characterizes this flattening. \\n\\n\\\"4) I am not too convinced that your approach outperforms AdaMixUp. (I may look protective of our own, but who would easily accept defeat? :) Did you compare your scheme with AdaMixUp on the same network structure (say, on CIFAR 100)? And the resNet result you present to compare with ours does not use a network having the same number of layers as ours.\\\"\\n\\nWe have updated the paper, particularly Table 1, to be clearer on this. Using the same architecture (ResNet18), Manifold Mixup is better on CIFAR-10 but slightly worse on CIFAR-100; it performs better when using a somewhat deeper network (ResNet34). \\n\\nOne thing is that I don't see why the methods couldn't be used together. You could use AdaMix when you mix in the input layer and not use it when you mix in the later layers (as in Manifold Mixup). Perhaps for the 1st or 2nd layer you could use a weakened version of AdaMix. \\n\\nI think the key thing is that both methods improve end results, but work through very different mechanisms, and have overlapping but distinct goals and priorities. I think this is how research ought to be. When you have two methods like \\\"dropout\\\" and \\\"batch normalization\\\", both do act as regularizers and can improve test accuracy, but their mechanisms and motivations are different, and it's important for the community to understand these mechanisms so that they can know where and how to apply them and what to work on in the future. \\n\\n\\\"5) I believe that our adaMixUp paper is the first work pinpointing the manifold intrusion/underfitting problem of MixUp. I think you should give us the deserved credit :)\\\"\\n\\nWe agree that it is very important to give proper credit here. While our preprint was released earlier, it has seen significant revisions after the release of the AdaMix paper, and these have more heavily emphasized the concept of intrusion, which was formally introduced and received a thorough treatment in the AdaMix paper.\"}", "{\"title\": \"Thanks for your valuable feedback (1)\", \"comment\": \"\\\"I am glad that you cited our AdaMixUp paper (to appear in AAAI 2019). ... Our approach, AdaMixUp deals with this problem in the input space, and you deal with it in a latent-representation space. \\\"\\n\\nI think our main goal is to show that by trying to avoid intrusion, Manifold Mixup learns better hidden representations. The key thing is the way that the hidden representations are themselves changed.
We demonstrate that this change consists of a flattening of the class-specific representations in theory (section 3), empirically (through the spectral analysis in Appendix I), and on toy datasets where the states can be visualized (Figure 1). Are these flattened representations better? One strength is that they encourage the features in earlier layers to be more discriminative, and another is that they make the features from real data points more concentrated, which means that more of the hidden space can be assigned lower confidence. \\n\\n\\\"1) In order for your scheme to work as desired, it is required that the network has sufficient capacity when transforming the data into a given latent space/layer. \\\"\\n\\nSection 3 gives sufficient conditions for Manifold Mixup to attain zero loss, but we have experimental evidence that this flattening occurs even when these conditions aren't satisfied exactly (especially the spectral analysis in Appendix I). Intuitively this is what we'd expect, because even if the number of hidden dimensions is too small, the different classes could be placed at the vertices of a regular polygon, so that interpolations can at least avoid intersecting with real data points (you can actually see this happening in Figure 7, where we used 2 hidden units and 5 classes). \\n\\nAt the same time, there is another motivation for Manifold Mixup unrelated to intrusion, which is that the higher-level hidden layers will generally learn more semantically meaningful features, so interpolating in that space will produce more meaningful mixes (and perhaps ones more similar to the points that can occur in the test set). \\n\\nI think there is significant strength in AdaMix's approach of still mixing in the input space but avoiding intrusions by learning where not to mix. One significant strength is that intrusion can be avoided while mixing still occurs entirely in the input space - which may provide stronger regularization than Manifold Mixup in some cases. For example, I wouldn't be surprised if AdaMix helped a lot with adversarial robustness (as robustness is usually defined in terms of the input space). \\n\\n\\\"2) Continuing from above, in practice we use finite-capacity networks. ... One measure that can reveal the answer to this is the loss associated with the mixUp samples. If the mixUp loss can be driven sufficiently low, it says that you more or less have succeeded in avoiding manifold intrusion. But I do not seem to see you show the curve of this loss. \\\"\\n\\nWe added a new plot to the paper (Figure 9) showing the training loss curves for mixing at different levels on CIFAR-10 (each experiment used alpha=2.0). The results of this are clear and consistent over the course of training. At the end of the 30th epoch of training, the train losses are: \\n\\n{0}: 0.155\\n{1}: 0.146\\n{2}: 0.127\\n{3}: 0.112\\n{0,1,2,3}: 0.144\\n{0,1,2,3}, blocking before mixing: 0.164\\n\\nThus we can see that relatively early in training, the lowest training error comes from mixing in the deepest layer and a much higher loss comes from mixing in the input layer. Intriguingly, an even higher loss is obtained from mixing in a random layer but blocking the gradient before the mixing layer.\"}", "{\"title\": \"Spectral Analysis of the Learned Representations\", \"comment\": \"Hello,\\n\\nSeveral reviewers have discussed the nature of the \\\"flattening\\\" of representations accomplished by manifold mixup.
Previously, our paper had theoretical results (section 3) and visualizations on toy problems with a 2-dimensional hidden layer. To improve on this, we conducted a new analysis based on the singular value decomposition (SVD) of the representations in a hidden layer. The goal here is to produce a precise empirical characterization of the flattening effect of Manifold Mixup. This is added in our new Appendix I (Figures 11-14). \\n\\nWe trained fully-connected models with a bottleneck hidden state (of either 12 or 30 dimensions) on the MNIST dataset. We considered placing this bottleneck state after 3 hidden layers and after a single hidden layer. We then performed SVD on those hidden representations to recover the singular values, which we plotted. We found the effect to be quite strong in both cases: Manifold Mixup reduces the value of the smaller singular values in the class-specific representations. This suggests that many directions have been 'flattened', i.e., the variance along these directions has been removed. At the same time, when we look at the singular values of the set of all representations (not class-specific), we see no clear difference between Manifold Mixup and the Baseline - which is in accordance with the intuition of the proof in section 3: that the variability which is removed is the variability which points in the direction of other classes. In general, the effect of Input Mixup on the singular values was inconsistent, which provides even more evidence that Manifold Mixup operates by a very different mechanism. Both Manifold Mixup and Input Mixup reduce the size of the largest singular value (spectral norm). \\n\\nFinally, when placed after a single hidden layer (which we would expect to be a somewhat weak model, and not able to solve the task completely), we still observed a clear flattening effect from Manifold Mixup (Figure 13), but less than when mixing is done in later layers. \\n\\nWhen we have referred to flattening, we have meant that the number of dimensions with variability is reduced, and the theory in section 3 gives some conditions for this to happen. At the same time, this new analysis gives us another way of thinking about flattening in terms of the geometric interpretation of singular value decomposition. You can think of the U and V matrices as rotations and the singular values (sigma) as a rescaling along dimensions. Thus reducing some of these singular values can be seen as a flattening effect in those directions.\"}", "{\"title\": \"my main concern remains and I am willing to increase my score if you could address it.\", \"comment\": \"Thank you for your feedback on my review. It addressed some of my concerns. Nevertheless, I am still not fully convinced that the proposed method addresses Mixup\\u2019s collision issue as claimed in your paper\\u2019s Abstract.\\n\\nI am glad to see that your method is less sensitive to the pre-defined Alpha than Mixup, so I think you may want to further emphasize that in your paper. Also, the results from SVHN are helpful, though I did not see them in the revision. It would be beneficial to have these good results in the paper.\\n\\nHowever, my main concern about the paper still remains. That is the claim that the synthetic interpolations generated by Manifold Mixup will not collide with a real sample. The new Section H really makes your point much clearer, but below please find my argument.\\n\\nI agree that if all class manifolds can be \\u201cflattened\\u201d (BTW, it may be a good idea to further clarify or define the meaning of \\u201cflatten\\u201d), the collision issue can be addressed. However, before reaching the goal of \\u201cflatten\\u201d manifolds, your method in fact uses mixed synthetic samples for training. That is, at the earlier stages of the training, collided or conflicted samples are used to chase the \\u201cflatten manifold\\u201d goal. This means that the model is trained with very noisy data, which may contain synthetic, soft-labeled samples which intersect or collide with other real samples. When training with collided samples, your training loss could be high, which may prevent you from \\u201cflattening\\u201d the manifolds. This issue could be worse when coping with data with a large number of classes such as Cifar100 or ImageNet.\\n\\nI wonder if the following suggestions could further help improve the paper.\\n1.\\tPlot all the training loss (with synthetic samples) in the supervised cases.\\n2.\\tProvide more analysis on the Cifar100 data. This is because I suspect that the collision issue could be worse when handling datasets with a large number of classes. Ideally, it would be very convincing to have results from ImageNet, which has 1000 classes. BTW, this is also a suggestion from AnonReviewer3. I think it is a really good suggestion.\\n3.\\tAny thoughts on my comment regarding Figure 4, which seems to be an indication of the collision issue? On the other hand, I have to admit that my observation could be a bit subjective in this case.\\n\\nIn short, the paper\\u2019s main novelty and contribution is addressing the collision issue in Mixup. I am willing to increase my score if you could address my main concern here.\"}", "{\"comment\": \"I am an author of AdaMixup, and I am going to post this review with my name revealed :)\\n\\nI am glad that you cited our AdaMixUp paper (to appear in AAAI 2019). In fact, your work and ours deal with the same problem, the problem we call \\\"manifold intrusion\\\" (where by manifold, we mean the data manifold, different from what you mean here). The problem is that when using the conventional mixup as a regularization scheme, there is no guarantee that training using mixed samples does not conflict with training using the original samples. Such a conflict, when arising, results in under-fitting. Our approach, AdaMixUp, deals with this problem in the input space, and you deal with it in a latent-representation space. \\n\\nBasically, in my understanding, your approach is to force, hopefully, the network to \\\"untangle\\\" the data in the latent space so that interpolated samples in the latent space do not collide with the original samples in their training objectives. Overall I think this is an interesting idea and a promising direction, but I do have a few questions, some perhaps more essential than others. \\n\\n1) In order for your scheme to work as desired, it is required that the network has sufficient capacity when transforming the data into a given latent space/layer. (Your theorem only holds under such an infinite-capacity assumption.) This implies either a) the appropriate choice of representation layer (on which you apply mixUp) is somewhere near the output, or b) the initial layers of the network are sufficiently complex (e.g. wide).
When the overall network isn't very deep and not too wide (but still overfits), there is no guarantee that your scheme will work as an effective regularization scheme. That is, there is no guarantee that the network architecture up to the latent representation layer is capable of fitting both the mixUp objective and the regular training objective. In other words, I suppose that your scheme may not be compatible with certain network architectures. This seems to be a disadvantage of your approach compared to AdaMixUp.\\n\\n2) Continuing from above, in practice we use finite-capacity networks. Then it is not clear to what extent the manifold intrusion (or under-fitting) is resolved with your approach. One measure that can reveal the answer to this is the loss associated with the mixUp samples. If the mixUp loss can be driven sufficiently low, it says that you more or less have succeeded in avoiding manifold intrusion. But I do not seem to see you show the curve of this loss. \\n\\n3) I actually do not see the value of picking a random layer to do mixUp. It seems to me that as long as at one layer the network is capable of transforming the data into the \\\"flattened\\\" manifold (using your word, but actually I think it is not the correct word), that is sufficient. Is there any principle underlying such a randomized strategy? \\n\\n4) I am not too convinced that your approach outperforms AdaMixUp. (I may look protective of our own, but who would easily accept defeat? :) Did you compare your scheme with AdaMixUp on the same network structure (say, on CIFAR 100)? And the resNet result you present to compare with ours does not use a network having the same number of layers as ours.\\n\\n5) I believe that our adaMixUp paper is the first work pinpointing the manifold intrusion/underfitting problem of MixUp. I think you should give us the deserved credit :)\", \"title\": \"Interesting and Promising\"}", "{\"title\": \"Thanks for feedback - is there anything additional that we could do?\", \"comment\": \"Hello,\\n\\nThanks again for your feedback. Our new experiments directly address the empirical questions for #3/#4 (effect of alpha and SVHN). We also ran a new experiment for Reviewer 1 which studied the effect of the choice of layers to mix in. \\n\\nFor the conceptual issues about manifold mixup (#1/#2), is there any chance that you could give us more details or feedback on them? This is really important to us, and if anything is in error or not argued convincingly, it would be great to understand better. \\n\\nAre there any experiments (especially related to the conceptual properties of manifold mixup) that you would be interested in, or that would make the arguments more convincing, or that would resolve any remaining issues? \\n\\nYour feedback has already been very helpful in making the paper better (for example, the new Appendix H and Figure 10 illustrating how inconsistent interpolations can be avoided), and if you have any more feedback it could be really helpful for us.\"}", "{\"title\": \"Thanks for your feedback\", \"comment\": \"R3:\\n\\n\\u201cAlthough their work is not extremely novel, the experiments and observations could serve as a useful extension to this line of research.\\u201d\\n\\nAlthough novelty is subjective, there is a case that the work is actually quite novel: \\n\\n 1) We present a novel analysis of how manifold mixup changes representations (section 3) which is totally different from the motivation of mixup (and indeed deals with a completely different problem, as the inputs in input mixup are fixed and cannot be changed by training). \\n\\n 2) The way that the representations are changed by manifold mixup is, to our knowledge, fairly unique, not just relative to mixup, but compared to other regularizers as well. For example, if you look at Figure 1 and Figure 6 in Appendix B, you\\u2019ll see that the way the representations are changed by manifold mixup is not accomplished by four common regularizers: weight decay, batch normalization, dropout, and adding noise to the hidden states. The representations look completely different, even though all of the methods succeed (to some extent) as regularizers. More concretely, manifold mixup has the fairly unique effect of concentrating the hidden states of the points from each class and encouraging the hidden space to have broad areas of low confidence between those regions. This is not accomplished to any appreciable degree by the other regularizers. This is some evidence that the method by which manifold mixup achieves regularization is fairly unique and worthy of further study. \\n\\n\\u201cThe associated functions represented by 'f', 'g' and 'h' change meaning between sec. 2 and sec. 3. It would be smoother if some consistency in notations was maintained.\\u201d\\n\\nThanks, that\\u2019s a good catch. Our intent was for g to refer to the earlier part of the network and for f to refer to the later part of the network. We\\u2019ve fixed the notation and uploaded an updated version of the paper.\"}", "{\"title\": \"Thanks for your Feedback\", \"comment\": \"Remarks 1/2 are addressed in the previous comment \\\"Motivation for why Manifold Mixup Works\\\".\\n\\nRemark 3: \\u201cI wonder how sensitive the parameter Alpha in Manifold Mixup is.\\u201d\\n\\nWe didn\\u2019t tune alpha very carefully, and used alpha=2.0 in all cases except for supervised learning with the large PreResNet152, where we performed better with larger alphas. Our general experience is that manifold mixup helps over a wide range of alphas, but it benefits more from larger alphas, especially when using a larger model. \\n\\nNonetheless, we performed a new experiment for the rebuttal where we trained a PreResNet18 on CIFAR-10 with a range of alphas. \\n\\nBaseline (no mixing): 93.21%\\n\\nManifold Mixup (\\u03b1=0.5): 96.12%\\nMixup (\\u03b1=0.5): 95.75%\\n\\nManifold Mixup (\\u03b1=1.0): 96.10%\\nMixup (\\u03b1=1.0): 95.84%\\n\\nManifold Mixup (\\u03b1=1.2): 96.29%\\nMixup (\\u03b1=1.2): 96.09%\\n\\nManifold Mixup (\\u03b1=1.5): 96.35%\\nMixup (\\u03b1=1.5): 96.06%\\n\\nManifold Mixup (\\u03b1=1.8): 96.45%\\nMixup (\\u03b1=1.8): 95.97%\\n\\nManifold Mixup (\\u03b1=2.0): 96.73%\\nMixup (\\u03b1=2.0): 95.83%\\n\\nManifold Mixup outperformed Input Mixup for all alphas in the set (0.5, 1.0, 1.2, 1.5, 1.8, 2.0) - indeed, the worst result for Manifold Mixup (96.10% at \\u03b1=1.0) is better than the best result with Input Mixup (96.09% at \\u03b1=1.2).
Note that Input Mixup\\u2019s results deteriorate when using an alpha that is too large, which is not seen with manifold mixup.\", \"remark_4\": \"\\u201cIt would be useful to also present the results for SVHN for supervised learning since the Cifar10 and Cifar100 datasets are similar, and the authors have already used SVHN for another task in the paper.\\u201d\\n\\nWe ran new experiments on SVHN using the training set without the \\u201cextra\\u201d data. We used PreActResNet-18 and used the exact same setup as with CIFAR-10.\", \"method\": \"Test Accuracy\\nManifold Mixup (\\u03b1=2.0): 98.10\\nManifold Mixup (\\u03b1=1.5): 98.08\\nInput Mixup (\\u03b1=1.5): 97.59\\nInput Mixup (\\u03b1=1.0): 97.63\\nInput Mixup (\\u03b1=0.5): 97.74\\nInput Mixup (\\u03b1=0.2): 97.71\\nInput Mixup (\\u03b1=0.05): 97.72\\nInput Mixup (\\u03b1=0.01): 97.70\", \"baseline\": \"97.78\", \"minor_remark_2\": \"\\u201cWhy not use Cifar100, rather than the new dataset SVHN, for the semi-supervised learning in section 5.2?\\u201d\\n\\nFor SSL, CIFAR-10 (with 4k labelled samples) and SVHN (1k labelled samples) have emerged as the standard benchmark datasets, and they have been used to compare all of the recent state-of-the-art methods, so we followed the same setup. We used the standard semi-supervised setup and the exact same architectures from (Oliver 2018) \\u201cRealistic Evaluation of Deep Semi-Supervised Learning Algorithms\\u201d, which evaluated on SVHN and CIFAR-10. \\n\\nMinor Remark 1/3: \\u201cIn Table 2, the result from AdaMix seems to be missing\\u2026 AgrLearn missing\\u201d\\n\\n(Note: updated this rebuttal section on 11/21)\\nNote that these were released after our method\\u2019s preprint was released and they cite our method, which is why we originally did not have them in our related work. Nonetheless, our paper has been updated to discuss AdaMix and AgrLearn in the related work. AdaMix reports 3.52% error on CIFAR-10 and 20.97% error on CIFAR-100. AgrLearn reports 2.45% on CIFAR-10 and 20.21% on CIFAR-100. We report 2.38% error on CIFAR-10 and 20.39% error on CIFAR-100. Note that AgrLearn was used together with Input Mixup (Zhang 2018) on CIFAR-10, so their method may be complementary with Manifold Mixup as well. This could be an interesting area for future work. \\n\\nI think how the methods are related is an interesting question. AdaMix only interpolates in the input space, and they report that their method hurt results significantly when they tried to apply it to the hidden layers. Thus the methods likely work for different reasons and might be complementary.\"}", "{\"title\": \"Thanks for your feedback\", \"comment\": \"\\u201c- There is little discussion in the manuscript about which layers should be eligible for mixup and how such layers get picked by the algorithm. I would suggest elaborating on this.\\u201d\\n\\nWe performed a new experiment to directly study this. Because the theory in section 3 assumes that the part of the network after mixing is a universal approximator, there is a sensible case to be made for not mixing in the very last layer. \\n\\nFor this experiment, we evaluated PreActResNet18 models on CIFAR-10 and considered mixing in a subset of the layers; we ran for fewer epochs than in the paper (making the accuracies slightly lower across the board), and we decided to fix the alpha to 2.0 as we did in the paper for manifold mixup. 
We considered different subsets of layers to mix in, with 0 referring to the input, 1/2/3 referring to the output of the 1st/2nd/3rd resblocks respectively. For example {0,2} refers to mixing in the input layer and the output of the 2nd resblock. {} refers to no mixing.\", \"layers\": \"Test Accuracy\\n{0,1,2}: 96.73%\\n{0,1}: 96.40%\\n{0,1,2,3}: 96.23%\\n{1,2}: 96.14%\\n{0}: 95.83%\\n{1,2,3}: 95.66%\\n{1}: 95.59%\\n{2,3}: 94.63%\\n{2}: 94.31%\\n{3}: 93.96%\\n{}: 93.21%\\n\\nEssentially, it helps to mix in more layers, except for the later layers, where mixing hurts to some extent - which we believe is consistent with our theory. \\n\\n\\u201c- References: several preprints cited in the manuscript are in fact long-published. I strongly feel proper credit should be given to authors by replacing outdated preprints with correct citations.\\u201d\\n\\nWe\\u2019ve updated all of the references to the conference/journal citations. See the new version of the paper uploaded. In the future it would be nice if arXiv could also list the bibtex for a conference/journal version, because these are often not easy to look up (for example, for older ICLR conferences it was hard to find the bibtex). Google Scholar does not help because it often only lists the first instance of the paper, which is usually arXiv.\\n\\n\\u201cI find the manifold mixup idea to be closely related to several lines of work for generalization abilities in machine learning (not just for deep neural networks). In particular, I would like to read the authors' opinion on possible connection to the vicinal risk minimization (VRM) framework, in which training data is perturbed before learning, to improve generalization (see, among other references, Chapelle et al., 2000). I feel it would help support the case of the manuscript and reach a broader community.\\u201d\\n\\nThe fundamental question of interest to us here is how deep networks behave when evaluated on points which are off of the data manifold. Vicinal risk minimization (Chapelle 2000), which you refer to, definitely seems like an improvement over ERM, but it seems like it\\u2019s very dependent on our ability to select the right \\u201cvicinity\\u201d. \\n\\nOur intuition is that our models should still be able to classify well off of the data manifold (just meaning points x where p_data(x)=0), by identifying factors and structural elements that are shared with the training distribution. VRM can deal with this if the vicinity covers points which are off of the manifold but doesn\\u2019t include points which change the class identity. In practice selecting this can be quite difficult. Defining the vicinity as a spherical Gaussian around the data points is unlikely to capture much of the space that exists off of the data manifold (or at least, reach these points with reasonable probability) while avoiding class overlap. \\n\\nThe \\u201cAutoAugment\\u201d paper (Cubuk 2018) proposed to learn such augmentations with a neural architecture search procedure (i.e. manually training submodels with different augmentation schemes and selecting those which lead to better generalization), although this is quite expensive and may be difficult to scale beyond a sequence of fixed augmentations.\"}", "{\"comment\": \"That clears up things! 
Nice paper!\", \"title\": \"Thanks for the response!\"}", "{\"title\": \"PGD\", \"comment\": \"Hello,\\n\\nManifold Mixup improves robustness to the weak FGSM attack but does not provide any robustness to the PGD attack (and I'm guessing for any strong attack). The same is true for mixup. This is actually already mentioned in the text at the end of section 5.3 and there is some discussion there on the intuition for this. \\n\\nOur only goal in including these results is to show that at least in some directions, Manifold Mixup does a better job than Input Mixup at moving the decision boundary away from the data - and not to claim robustness (which would require the decision boundary to move further away in *all* directions). \\n\\nI think that at least one reason why it is not adversarially robust is that we only consider interpolations between pairs of points, and thus I don't think there's a reason to believe that these points would cover all of the directions that an adversarial perturbation could take around a data point.\"}", "{\"comment\": \"Do the numbers hold up when you replace FGSM by MIM or PGD perhaps?\", \"title\": \"Table 3. FGSM replaced by MIM or PGD?\"}", "{\"title\": \"Motivation for why Manifold Mixup Works\", \"comment\": \"Thank you for your review. We will post a more detailed response with new experimental results soon, but I want to quickly address issues related to the motivation for why manifold mixup works. We also updated the paper with a new appendix section H (page 20) which discusses this in more detail and gives an illustration.\\n\\n\\u201cMixup can suffer from interpolations intersecting with a real sample, but how Manifold Mixup can avoid this issue is not very clear to me \\u2026 The observation that mixing in the hidden space is better than mixing in the input space seems contradictory to the observations by Mixup; it would be very useful if the paper could make that clearer to the readers\\u201d\\n\\nYou are correct that manifold mixup works through a mechanism which is very different from input mixup, which I think is actually what makes it interesting. \\n\\nWith input mixup, if the interpolations between two points of the same class intersect with points from a different class (or interpolations are inconsistent), this leads to underfitting and poor performance. You can see this in the center column of figure 1. However with manifold mixup, the hidden states of the network are learned, such that these inconsistent interpolations are avoided. \\n\\nTo illustrate, let\\u2019s imagine that you have a binary classification problem with 2 examples from class A and 2 examples from class B. Let\\u2019s suppose that we perform manifold mixup in a single 1-dimensional hidden layer. Let\\u2019s say that the points from A are both at h=0. Where can the points from B be located for the interpolations to all return the same label? If the points from class B have different h values, then the interpolations must be inconsistent. For example if one point from class B is at h=1 and one point from B is at h=2, then the point h=1 will either be labeled as 100% class B or it will be labeled as 50% class B / 50% class A. This will cause manifold mixup to have error, and the only way for it to avoid this is to learn the hidden states such that all examples from each class map to the same point. This is what needs to happen if we have a 1D hidden space and 2 classes. For higher dimensional hidden spaces, a similar phenomenon occurs but it is much less restrictive. 
\\n\\nSection 3 provides exact conditions for these inconsistent interpolations to be completely avoided. Essentially, the representations for each class need to \\u201cflatten\\u201d so that they don\\u2019t have any variation in directions which point towards other classes (you can imagine that this would lead to inconsistent interpolations because some points of the same class would have different distances to points from the other classes). Figure 1c/1f shows exactly how this happens in a toy problem. \\n\\nMoreover in section 5.1 we presented an experiment where we train with manifold mixup, but don\\u2019t pass gradient to layers before the layer where we mix (however all layers are still trained, as the layer to mix in is randomly selected on each update) - and this made accuracy much worse. This is strong evidence that it is important for manifold mixup to learn to change the representations to make interpolations consistent. \\n\\nWhy is it desirable for manifold mixup to change the representations to avoid inconsistent interpolations? The first reason is that it can help to avoid underfitting, but another reason is that the way to make interpolations consistent is to make the representations for each class more concentrated, which can only be accomplished by forcing the network to learn more discriminative features in earlier layers. \\n\\nPlease let me know if anything is unclear here, if you\\u2019re uncertain about part of the argument, or if there is any other type of illustration/figure that would be helpful.\"}", "{\"title\": \"The paper is well written and its tone is notably scientific, though the novelty is limited\", \"review\": \"The tone of the paper is notably scientific, as the authors clearly state the assumptions and all observations, whether positive or negative. That said, the approach itself can be seen as a direct extension of the earlier advanced 'mixup' scheme. In addition to performing data augmentation in the input space, their method proposes to train the networks on the convex combinations of the hidden state representations by learning to map them to the convex combinations of their one-hot ground truth encodings.\\n\\nThe results are competitive, in most cases exceeding the current state of the art. However, the scheme has only been tested on low-res datasets such as MNIST, CIFAR and SVHN while the predecessor (plain 'mixup') also demonstrated improvement over the much larger and high-res ImageNet dataset.\\n\\nAlthough their work is not extremely novel, the experiments and observations could serve as a useful extension to this line of research.\", \"suggestions\": \"1. The results on ImageNet would be a useful add-on to really drive home the benefit of their method when we talk of real-world large-scale datasets. \\n2. The associated functions represented by 'f', 'g' and 'h' change meaning between sec. 2 and sec. 3. It would be more smooth if some consistency in notations was maintained.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review\", \"review\": \"The paper proposes a novel method called Manifold Mixup, which linearly interpolates (with a carefully selected mixing ratio) two feature maps in latent space, as well as their labels, during training, aiming at regularizing deep neural networks for better generalization and robustness to adversarial attacks. 
The authors experimentally show that networks with Manifold Mixup as regularizer can improve accuracy for both supervised and semi-supervised learning, are robust to adversarial attacks, and obtain promising results on Negative Log-Likelihood on held-out samples.\\n\\nThe paper is well written and easy to follow. Various experiments are conducted to support the contributions of the paper. Nevertheless, the technical novelty seems a bit weak to me. The method basically moves the interpolating process from the input space, as in MixUp, to randomly selected hidden states. More importantly, some of the paper\\u2019s claims are not very convincing to me in its current form.\", \"major_remarks\": \"1.\\tThe authors suggest that Mixup can suffer from interpolations intersecting with a real sample, but how Manifold Mixup can avoid this issue is not very clear to me. \\nThe authors theoretically prove that with the proposed training cost in Manifold Mixup, the representation for each class will lie on a subspace of dimension dim(h)-d+1 (h and d are the hidden dimension and number of classes, respectively). I did not get the idea of how such dimension reduction relates to the \\u201cflattening\\u201d of the manifold and in particular how such representations (representations for each class \\u201cconcentrating into local regions\\u201d) can avoid the class collision issues seen in Mixup.\\nExperimentally, from Figures 3 and 4, it seems the class collision issue could be worse than that of Mixup. For example, for a mixing ratio of 0.6 (meaning the created image has almost half of its label from each of the two original images), MixUp clearly shows, for instance in the second row, that there are two overlapped images (Horse and Plane), but Manifold Mixup seems to have only the Plane in the mixed image with a soft label. \\n\\n2.\\tThe observation that mixing in the hidden space is better than mixing in the input space seems contradictory to the observations by Mixup; it would be very useful if the paper could make that clearer to the readers. I would suggest that the authors fully compare with MixUp in the supervised learning tasks, namely using all the datasets (including ImageNet) and network architectures used in MixUp for supervised learning. In this way, the paper would be much more convincing because the proposed method is so close to MixUp and the observation here is contradictory.\\n3.\\tI wonder how sensitive is the parameter Alpha in Manifold Mixup. For example, how does the mixing rate Alpha impact the results for NLL and semi-supervised learning in section 5.2? \\n4.\\tIt would be useful to also present the results for SVHN for supervised learning since the Cifar10 and Cifar100 datasets are similar, and the authors have already used SVHN for another task in the paper.\", \"minor_remarks\": \"1.\\tIn Table 2, the result from AdaMix seems to be missing.\\n2.\\tWhy not use Cifar100, rather than the new dataset SVHN, for the semi-supervised learning in section 5.2?\\n3.\\tIn related work, regarding regularizing deep networks by perturbing the hidden states, the proposed method may relate to AgrLearn (Guo et al., Aggregated Learning: A Vector Quantization Approach to Learning with Neural Networks) as well.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good paper\", \"review\": \"TL;DR. 
a generalization of the mixup algorithm to any layer, improving generalization abilities.\\n\\n* Summary\\n\\nThe manuscript generalizes the mixup algorithm (Zhang et al., 2017), which proposed to interpolate between inputs to yield better generalization. The present manuscript addresses a considerably more general setting, as the mixup may occur at *any* layer of the network, not just the input layer. Once a layer is chosen, mixup occurs with a random proportion $\\lambda\\in (0,1)$ (sampled from a $\\mathrm{Beta}(\\alpha,\\alpha)$ distribution).\", \"a_salient_asset_of_the_manuscript_is_that_it_avoids_a_pitfall_of_the_original_mixup_algorithm\": [\"interpolating between inputs may result in underfitting (if inputs are far from each other: the interpolation may overlap with existing inputs). Interpolating deep layers of the network makes it less prone to this phenomenon.\", \"A sufficient condition for Manifold Mixup to avoid this underfitting phenomenon is that the dimension of the hidden layer exceeds the number of classes.\", \"I found no flaw in the (two) proofs. Literature is well acknowledged. In my opinion, a clear accept.\", \"Major remarks\", \"There is little discussion in the manuscript about which layers should be eligible for mixup and how such layers get picked by the algorithm. I would suggest elaborating on this.\", \"References: several preprints cited in the manuscript are in fact long-published. I strongly feel proper credit should be given to authors by replacing outdated preprints with correct citations.\", \"I find the manifold mixup idea to be closely related to several lines of work for generalization abilities in machine learning (not just for deep neural networks). In particular, I would like to read the authors' opinion on possible connection to the vicinal risk minimization (VRM) framework, in which training data is perturbed before learning, to improve generalization (see, among other references, Chapelle et al., 2000). I feel it would help support the case of the manuscript and reach a broader community.\", \"Minor issues\", \"Tables 1 and 3: no confidence interval / standard deviation provided, diminishing the usefulness of those tables.\", \"Footnote, page 4: I would suggest adding a reference to the consistency theorem, to improve readability.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
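The training-time procedure debated throughout the record above is compact enough to sketch. The following is a minimal PyTorch rendering of one Manifold Mixup update as we read the rebuttals: pick a random eligible layer, interpolate the hidden states and the one-hot targets with a ratio drawn from Beta(alpha, alpha), and train on the resulting soft labels. The function name, the blocks/classifier decomposition, and the default eligible set {0, 1, 2} are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of one Manifold Mixup update (PyTorch). "blocks" is a list of
# network stages; mixing at layer 0 recovers Input Mixup. Gradients flow into
# the layers below the mixing point, which the rebuttal argues is essential.
import random
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def manifold_mixup_loss(blocks, classifier, x, y, num_classes,
                        alpha=2.0, eligible_layers=(0, 1, 2)):
    k = random.choice(eligible_layers)           # layer to mix in, per update
    lam = Beta(alpha, alpha).sample().item()     # mixing ratio ~ Beta(alpha, alpha)
    perm = torch.randperm(x.size(0))             # random pairing within the batch

    h = x
    for i, block in enumerate(blocks):
        if i == k:                               # interpolate hidden states here
            h = lam * h + (1.0 - lam) * h[perm]
        h = block(h)

    y_hot = F.one_hot(y, num_classes).float()
    y_soft = lam * y_hot + (1.0 - lam) * y_hot[perm]  # interpolate the labels
    log_p = F.log_softmax(classifier(h), dim=1)
    return -(y_soft * log_p).sum(dim=1).mean()   # soft-label cross-entropy
```

Fixing eligible_layers to (0,) reduces this to the Input Mixup baseline used in the alpha-sweep and SVHN tables above, which is one way to reproduce those comparisons.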
ryf6Fs09YX
GO Gradient for Expectation-Based Objectives
[ "Yulai Cong", "Miaoyun Zhao", "Ke Bai", "Lawrence Carin" ]
Within many machine learning algorithms, a fundamental problem concerns efficient calculation of an unbiased gradient wrt parameters $\boldsymbol{\gamma}$ for expectation-based objectives $\mathbb{E}_{q_{\boldsymbol{\gamma}} (\boldsymbol{y})} [f (\boldsymbol{y}) ]$. Most existing methods either ($i$) suffer from high variance, seeking help from (often) complicated variance-reduction techniques; or ($ii$) they only apply to reparameterizable continuous random variables and employ a reparameterization trick. To address these limitations, we propose a General and One-sample (GO) gradient that ($i$) applies to many distributions associated with non-reparameterizable continuous {\em or} discrete random variables, and ($ii$) has the same low-variance as the reparameterization trick. We find that the GO gradient often works well in practice based on only one Monte Carlo sample (although one can of course use more samples if desired). Alongside the GO gradient, we develop a means of propagating the chain rule through distributions, yielding statistical back-propagation, coupling neural networks to common random variables.
[ "generalized reparameterization gradient", "variance reduction", "non-reparameterizable", "discrete random variable", "GO gradient", "general and one-sample gradient", "expectation-based objective", "variable nabla", "statistical back-propagation", "hierarchical", "graphical model" ]
https://openreview.net/pdf?id=ryf6Fs09YX
https://openreview.net/forum?id=ryf6Fs09YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJxL88VrbH", "S1l2DW1O1H", "B1eUsWyUJr", "BJxo5-bLeV", "rkeOMIWcCm", "Skx3km-FpQ", "ryxx8ZbFa7", "S1eUAxbKa7", "ryerVgbtT7", "rJlsvSSchm", "HygWY_1c2X", "rklz9YLKh7", "B1ef1_Gg57", "SklUUmJxcX" ], "note_type": [ "comment", "official_comment", "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1562883662310, 1560961379656, 1560830366354, 1545109907174, 1543276048027, 1542161123907, 1542160712173, 1542160590106, 1542160429101, 1541195106770, 1541171321438, 1541134730510, 1538430938223, 1538417486217 ], "note_signatures": [ [ "~Wu_Lin2" ], [ "ICLR.cc/2019/Conference/Paper490/Authors" ], [ "~Wu_Lin2" ], [ "ICLR.cc/2019/Conference/Paper490/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper490/Authors" ], [ "ICLR.cc/2019/Conference/Paper490/Authors" ], [ "ICLR.cc/2019/Conference/Paper490/Authors" ], [ "ICLR.cc/2019/Conference/Paper490/Authors" ], [ "ICLR.cc/2019/Conference/Paper490/Authors" ], [ "ICLR.cc/2019/Conference/Paper490/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper490/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper490/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper490/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"comment\": \"Please see Definition 1. For example, if h(z) is a Relu function, the current implementation for computing the gradient is correct since the gradient exists almost everywhere except the null set {0}.\", \"title\": \"Definition of $\\\\nabla_{z_j} h(z)$\"}", "{\"title\": \"Thanks a lot for the reminder\", \"comment\": \"Thanks for reminding your interesting work. We agree that those smoothness assumptions mentioned would help with a more rigorous mathematical foundation for the GO gradient. Also, it would be appreciated if you could mention our paper like in your Theorem 3.\", \"a_quick_question_about_weakening_the_smoothness_condition_of_the_go_gradient\": \"in your Theorem 4, what's the definition of $\\\\nabla_{z_j} h(z)$ when $h(z)$ is not continuously differentiable? Thanks.\"}", "{\"comment\": \"You may look at our poser https://github.com/yorkerlin/VB-MixEF/blob/master/poster_workshop.pdf for the ICML workshop on Stein's method, where we weaken the smoothness assumption of the implicit reparameterization gradients. In other words, we also weaken the smoothness condition of the GO gradient for continuous cases. In our poster, we only focus on the exponential family. However, the idea can be readily extended to general continuous univariate distribution. For multivariate case, you can use Theorem 4 in our poster.\", \"title\": \"Weakening the smoothness condition of the implicit reparameterization gradients\"}", "{\"metareview\": \"This clearly written paper develops a novel, sound and comprehensive mathematical framework for computing low variance gradients of expectation-based objectives. The approach generalizes and encompasses several previous approaches for continuous random variables (reparametrization trick, Implicit Rep, pathwise gradients), and conveys novel insights.\\nImportantly, and originally, it extends to discrete random variables, and to chains of continuous random variables with optionally discrete terminal variables. 
These contributions are well exposed, and supported by convincing experiments.\\nQuestions from reviewers were well addressed in the rebuttal and helped significantly clarify and improve the paper, in particular for delineating the novel contribution against prior related work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A comprehensive mathematical framework for unbiased low variance gradient estimator that applies to continuous and discrete random variables\"}", "{\"title\": \"Thank you for supporting our work.\", \"comment\": \"Thanks for the positive rating. We are glad that you acknowledge our contributions.\\n\\nThank you again for your time and effort in reviewing our paper. We really appreciate it.\"}", "{\"title\": \"Revision uploaded\", \"comment\": \"We thank all the reviewers for their time and effort.\\n\\nWe have responded to each of the reviewers and have uploaded a revised manuscript which addresses the reviews and comments. \\n\\nFurther discussion would be welcomed.\"}", "{\"title\": \"Addressing Reviewer 3 concerns\", \"comment\": \"Thank you for your time and effort in reviewing our paper. Please see our response below.\\n\\n\\kappa is an auxiliary notation used to remove the ambiguity of the two \\gammas in g_{\\gamma}^{q_{\\gamma}(y)}. \\kappa stands for the parameter/variable with respect to which the gradient information is needed. For example, \\n(i) g_{\\kappa}^{q_{\\gamma}(y)} = \\frac{-1}{q_{\\gamma}(y)} \\nabla_{\\kappa} Q_{\\gamma}(y), where \\kappa is \\gamma, as in Theorem 1;\\n(ii) g_{\\kappa}^{q_{\\gamma}(y|\\lambda)} = \\frac{-1}{q_{\\gamma}(y|\\lambda)} \\nabla_{\\kappa} Q_{\\gamma}(y|\\lambda), where \\kappa could be \\gamma or \\lambda. \\n\\nEqs. (7) and (8) are the foundations GO is built on, but they are not our GO. GO is defined in Eq. (9) of Theorem 1.\\nFor Eq. (9), yes, y_{-v} is selected from one sample y in the experiments. But GO is not the local expectation gradient (Titsias & Lazaro-Gredilla, 2015), because GO uses different information (the derivative of the CDF and the difference of the expected function). As pointed out in the last paragraph of Sec. 3, when y_v has finite support and the computational cost is acceptable, one could use the local idea from Titsias & Lazaro-Gredilla (2015) for lower variance, namely analytically evaluating a part of the expectations in Eq. (9). For a detailed example, please refer to Appendix I. The main difference between the local expectation gradient and the proposed GO is that the latter is applicable where the former might not be, such as where y_v has infinite support or the computational cost for the local expectation is prohibitive.\\n\\nPlease note our GO is defined in Eq. (9). As pointed out in the last paragraph of Sec. 3, calculating Dy[f(y)] (requiring V+1 f evaluations) could be computationally expensive. We also stated there, \\u201cfor f(y) often used in practice special properties hold that can be exploited for efficient parallel computing\\u201d. We took the VAE experiment in Sec 7.2 as an example and gave in Appendix I its detailed analysis/implementation, in which you might be interested. More specifically, the two bullets after Table 4 should be able to address your question on fast speed. Also, as noted in the penultimate paragraph of Sec. 
7.2, having fewer parameters (no neural-network-parameterized control variate) could be another reason for GO\\u2019s efficiency. \\n \\nAs for computational complexity, since different random variables (RVs) have different variable-nablas (as shown in Table 3 in the Appendix), GO has different computational complexity for different RVs. After choosing a specific RV, one should be able to obtain GO\\u2019s computational complexity straightforwardly. For quantitative evaluation, the running time for each experiment has been given in the corresponding Appendix. Please check there if interested.\\n\\nThank you for pointing out the concern on multi-sample-based REINFORCE. We have added another curve labeled REINFORCE2 to the one-dimensional NB experiments (see Fig. 8 for complete results), where the number 2 means using 2 samples to estimate the REINFORCE gradient. In this case, REINFORCE2 uses 2 samples and 2 f evaluations in each iteration, whereas GO uses 1 sample and 2 f evaluations. As expected, REINFORCE2 still exhibits higher variance than GO even in this simple one-dimensional setting. We believe multi-sample-based REINFORCE is unnecessary for the other experiments, because (i) the variance of REINFORCE is well known to increase with dimensionality; and (ii) after all, if multi-sample-based REINFORCE worked well in practice, why would we need variance-reduction techniques?\\n\\nPlease refer to Sec. 7.2 and Appendix I: the author-released code from Grathwohl (2017) (github.com/duvenaud/relax) was run to obtain the results of REBAR and RELAX. We adopted the same hyperparameter settings therein for our GO. So, we do not think the hyperparameter settings favor our GO in the reported experiments. \\nPlease refer to the first paragraph of Sec. 7.2, \\u201cSince the statistical back-propagation in Theorem 3 cannot handle discrete internal variables, we focus on the single-latent-layer settings (1 layer of 200 Bernoulli random variables).\\u201d\\nIf you are interested, as stated in the last paragraph of Sec 7.2, we presented in Appendix B.4 a procedure to assist our methods in handling discrete internal RVs. We believe that procedure might be useful for the inference of models with discrete internal RVs (like the multi-layer discrete VAE). \\n\\nPlease refer to the last paragraph of Appendix I, where we explained this misunderstanding in detail. In short, GO does not suffer more from overfitting; one reason is that GO can provide a higher validation ELBO. Actually, we believe it is GO\\u2019s efficiency that causes this misunderstanding. \\n\\nWe hope your concerns have been addressed. If not, further discussion would be welcomed.\"}", "{\"title\": \"Addressing Reviewer 1 concerns\", \"comment\": \"We appreciate your time and effort in reviewing our paper, and thank you for the insightful and constructive comments.\\n\\nFor simplicity of the main paper, we moved all the detailed proofs to the Appendix. More specifically, the proofs for Theorem 1, Lemma 1, Theorem 2, Corollary 1, and Theorem 3 are given in Appendix A, C, D, E, and F, respectively.\\n\\nThanks a lot for pointing out the smoothness conditions for reparameterization; we have carefully revised our paper to remove the misleading statements and to make it clearer when our method (and also the reparameterization trick, Rep) is applicable. For your comments wrt discrete random variables (RVs), unfortunately, we haven\\u2019t found a principled way to back-propagate gradient through discrete internal RVs (like in multi-layer sigmoid belief networks). 
However, as stated in the last paragraph of Sec. 7.2, we presented in Appendix B.4 a procedure to assist our methods in handling discrete internal RVs. We believe that procedure could be useful for the inference of models like multi-layer sigmoid belief networks. As for the conditional independence, it is actually removed after marginalizing out additional continuous RVs (which could be non-reparameterizable RVs like Gamma). Also note that one can strengthen the aforementioned procedure by inserting more additional continuous internal RVs into the inference model to enlarge its (marginal) description power.\\n\\nThe notation was chosen for harmony and also to keep consistency with the main literature. For example, one can add another expectation wrt the true data distribution q(x) to the ELBO in Eq. (1), that is, E_{q(x)} [ELBO] = E_{q(x) q(z|x)} [log p(x,z) - log q(z|x)] \\propto - KL[q(x)q(z|x) || p(x,z)].\\n\\nFor dropout, since the dropout rate is a tunable hyperparameter that need not be learned (thus no back-propagation is required), one can use Rep to construct the q distribution you defined. If we understand correctly, in that case we cannot demonstrate our advantages. Currently, the proposed method cannot be directly applied to multi-layer sigmoid belief networks (without the procedure in Appendix B.4). We have made an explicit statement of this in the revised manuscript.\\n\\nThank you for pointing this out. However, it\\u2019s believed that Rep cannot be applied to Gamma distributions [1,2]. We have revised our statement to \\u201cThere are situations for which Rep is not readily applicable, e.g., where the components of y may be discrete or nonnegative Gamma distributed\\u201d.\\n[1] F. Ruiz, M. Titsias, and D. Blei. The generalized reparameterization gradient. In NIPS, pp. 460\\u2013468, 2016.\\n[2] C. Naesseth, F. Ruiz, S. Linderman, and D. Blei. Rejection sampling variational inference. arXiv:1610.05683, 2016.\\n\\nYes, Lemma 1 shows that our deep GO will reduce to Rep when Rep is applicable. We are not sure whether you were asking about the difference in Fig. 1 or Fig. 2. So, two responses are given below.\\n(A) In Fig. 1, the difference comes from the definition of node y^(i). For deterministic deep neural networks, node y^(i) is the activated value after an activation function, where the deterministic chain rule can be readily applied; while for the deep GO gradient, node y^(i) might be the sample of a non-reparameterizable RV, where the deterministic chain rule is not applicable. Please also refer to the main contribution (ii) of our response to Reviewer 2.\\n(B) If you were interested in the difference in Fig. 2 (a)(b), the reasons include (1) the standard Rep cannot be applied to Gamma RVs; (2) both GRep and RSVI are designed to approximately reparametrize Gamma RVs; (3) GO generalizes Rep to non-reparameterizable RVs; or in other words, GO is identical to the exact Rep for Gamma RVs.\\n\\nYes, the sticking approach was implicitly adopted for all the compared methods when it is applicable. We have made a clear statement in the revised paper.\\n\\nSince the stochastic computation graph (SCG) is based on REINFORCE and our method is based on GO, the comparison between SCG and our method is (roughly speaking) identical to that between REINFORCE and GO. That is, SCG is more generally applicable but with higher variance; the proposed method has less generalizability but with much lower variance. 
We have added the following discussion to Related Work.\\n\\u201c\\u2026as the Rep gradient (Grathwohl et al., 2017). SCG (Schulman et al., 2015) utilizes the generalizability of REINFORCE to construct widely-applicable stochastic computation graphs. However, REINFORCE is known to have high variance, especially for high-dimensional problems, where the proposed methods are preferable when applicable (Schulman et al., 2015). Stochastic back-propagation\\u2026\\u201d\\n\\nThank you for pointing out these fundamental conditions, which we have added to the revised manuscript.\\n\\nWe hope your concerns have been addressed. If not, further discussion would be welcomed.\"}", "{\"title\": \"Addressing Reviewer 2 concerns\", \"comment\": \"Thank you for your time and effort in reviewing our paper. Please see our response below.\\n\\nOur main contributions include:\\n\\n(i) For single-layer random variables (RVs), we propose a unified gradient named GO by exploiting the integration-by-parts idea, which is applicable to continuous/discrete RVs. In the special case of single-layer continuous RVs where GO recovers Implicit Rep or pathwise gradients, we consider it our contribution to provide a principled explanation (via integration by parts) of why Implicit Rep and pathwise gradients have low Monte Carlo variance; or in other words, we prove that their implicit differentiation originates from integration by parts.\\n\\n(ii) For multi-layer RVs, our main contribution is the discovery that with GO (or in other words, the introduced variable-nabla), one can back-propagate gradient information through a nested combination of nonlinear functions and general RVs (including non-reparameterizable continuous RVs, back-propagating through which is challenging). Another interpretation of this contribution is that GO enables generalizing the deterministic chain rule to a statistical version. Here, we refer to the deterministic chain rule as back-propagating gradient through deterministic functions (like neural networks) or reparameterizable RVs (like Gaussian). By contrast, the statistical chain rule is referred to as back-propagating gradient through more general RVs (including non-reparameterizable ones). Of course, the statistical chain rule recovers the deterministic chain rule for deterministic functions and reparameterizable RVs, because GO recovers the standard Rep.\\n\\n(iii) Two more minor contributions are Lemma 1 and Corollary 1. In Lemma 1, we explicitly prove that our deep GO gradient contains the standard Rep as a special case, in general beyond Gaussian. Note neither Implicit Rep nor pathwise gradients can recover Rep in general, because a neural-network-parameterized reparameterization usually leads to a nontrivial CDF. In Corollary 1, we reveal the fact that the proposed method reduces to the classical back-propagation algorithm under specific settings.\\n\\nFinally, we believe it is interesting to create a consistent architecture, which unifies (a) a GO gradient which contains many popular gradients as special cases, and (b) a more general statistical chain rule developed based on GO which recovers the well-known deterministic chain rule under specific cases.\\n\\nFor your comments not addressed above, please see our additional response below.\\n\\n(1) We have made clearer the relationships among the standard Rep, Implicit Rep/pathwise, and our GO in the revised manuscript. In the revised paper we have explicitly pointed out that the experiments from (Figurnov et al. 
2018; Jankowiak & Obermeyer, 2018) additionally support our GO in the special case of single-layer continuous RVs.\\n\\n(2) Please refer to our main contributions summarized above, where other contributions, beyond GO for discrete RVs, are clarified.\\n\\n(3) Please refer to our main contributions (ii)-(iii). As stated in our paper, many works tried to solve the problem of stochastic/statistical back-propagation. We consider our contributions in Secs. 4 and 5 as one step toward that final goal. Please note that what\\u2019s done in Secs. 4 and 5 is not straightforward and has not been reported before. Since stochastic back-propagation (Rezende et al., 2014; Fan et al., 2015) focuses mainly on reparameterizable RVs, the deterministic chain rule as mentioned in main contribution (ii) can be readily applied. By contrast, we target more general situations in Secs. 4 and 5 where the deterministic chain rule might not be applicable, such as for non-reparameterizable (continuous) RVs. We prove that one can utilize our GO to sequentially back-propagate gradient through non-reparameterizable continuous RVs, namely the statistical chain rule mentioned in main contribution (ii). \\n\\nWe have revised the last paragraph of the Introduction to make a more explicit summary of our main contributions, as mentioned above.\\n\\nWe hope your concerns have been addressed. If not, further discussion would be welcomed.\"}", "{\"title\": \"Ambitious paper addressing a relevant problem, but the novel contributions are not clear. High overlap with previous papers.\", \"review\": \"This paper presents a gradient estimator for expectation-based objectives, which is called the Go-gradient. This estimator is unbiased, has low variance and, in contrast to other previous approaches, applies to both continuous and discrete random variables. They also extend this estimator to problems where the gradient should be \\\"backpropagated\\\" through a nested combination of random variables and (non-linear) functions. The authors present an extensive experimental evaluation of the estimator on different challenging machine learning problems.\\n\\n\\nThe paper addresses a relevant problem which appears in many machine learning settings, namely the problem of estimating the gradient of an expectation-based objective. In general, the paper is well written and easy to follow. And the experimental evaluation is extensive and compares with relevant state-of-the-art methods. \\n\\nThe main problem with this paper is that it is difficult to identify its main and novel contributions. \\n\\n1. In the case of continuous random variables, the Go-gradient is equal to Implicit Rep gradients (Figurnov et al., 2018) and pathwise gradients (Jankowiak & Obermeyer, 2018). Furthermore, for the Gaussian case, Implicit Rep gradients (and the Go-gradient too) are equal to the standard reparametrization trick estimator (Kingma & Welling, 2014). This should be made crystal-clear in the paper. What happens is that the authors arrive at this solution using a different approach. \\n\\nIn this sense, claims about the low variance of the GO-gradient wrt other reparametrization-based estimators should be removed, as they are the same. Moreover, I don't think some of the presented experiments are necessary, simply because for continuous variables similar experiments have been reported before (Figurnov et al., 2018; Jankowiak & Obermeyer, 2018). \\n\\n2. It seems that the main novel contribution of the paper is to extend the ideas of (Figurnov et al. 
2018; Jankowiak & Obermeyer, 2018) to discrete variables. This is a relevant contribution, and the experimental evaluations of this part are convincing and compare favourably with other state-of-the-art methods. \\n\\n3. The authors should be much clearer about what their original contribution is to the problems stated in Section 4 and Section 5. As the authors acknowledge in Section 6: <<Stochastic back-propagation (Rezende et al., 2014; Fan et al., 2015), focusing mainly on re-parameterizable Gaussian random variables and deep latent Gaussian models, exploits the product rule for an integral to derive gradient backpropagation through several continuous random variables.>> This is exactly what the authors do in these sections. Again, it seems that the real contribution of this paper here is to extend these stochastic back-propagation (Rezende et al., 2014; Fan et al., 2015) ideas to discrete variables. Although this extension seems to be easily derived using the contributions made in point 2. \\n\\nSummarizing, the paper addresses a relevant problem but does not clearly state what its main contributions are, and reintroduces some ideas previously published in the literature.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A solid contribution with some presentation issues: scope of applicability, clarity, technical correctness\", \"review\": \"* Summary\\n\\nThe paper proposes an improved method for computing derivatives of the expectation. Such problems arise with many probabilistic models with noise or latent variables. The paper proposes a new gradient estimator of low variance applicable in certain scenarios; in particular it allows training of generative models in which observations and/or latent variables are discrete. \\nThe submission clearly improves the state of the art and experimentally demonstrates the method on several problems, comparing with the alternative techniques. In what concerns the optimization, the method achieves a better objective value much faster, confirming that it is a lower-variance gradient estimator. \\nThe clarity of the presentation (in particular the description of when the method is applicable) and the technical correctness of the paper are somewhat lacking. In terms of applicability, it seems that many cases where discrete latent variables would be really interesting are not covered (e.g. sigmoid belief networks); the paper demonstrates experiments with discrete images (binary or 4-bit), which are not particularly motivated in my opinion. It also contains lots of additional technical details and experiments in the appendix, which I unfortunately did not review.\\n\\n* Clarity\\n\\nIn the abstract the paper promises more than it delivers. Many problems can be cast as optimizing an expectation-based objective. The result does not at all apply to all of them. The reparameterization trick does not apply to all continuous random variables, only to those for which the reparameterization satisfies certain smoothness conditions. Discrete variables are supported by the method only in the case that the distribution factors over all discrete variables conditionally on any additional \\u201ccontinuous variables\\u201d (to which the reparameterization trick is applicable). This very much limits the utility of the method. In particular it is not applicable to learning e.g. sigmoid belief networks [Neal, 92] (with conditional Bernoulli units) and many other problems. 
\\n\\n\\u201creparametrizable distributions\\u201d\\nA Bernoulli(p) random variable is discrete, yet it is reparametrizable as [Z>p] with Z following a standard logistic distribution, whose density and CDF are smooth. \\n\\nBecause of the above, many discussions about discrete vs. continuous variables are misleading.\\n\\nSection 2. The notation of the true distribution as \\u201cq\\u201d, the model as \\u201cp\\u201d, and the approximate posterior of the model as \\u201cq\\u201d again is inconsistent. I find the background on ELBO and GANs unnecessary, occluding the clarity at this point. For the purpose of introduction, it might be better to give examples of expectation objectives such as: \\n- dropout: q is the distribution of NN outputs given the input image, with latent dropout noises integrated out; gamma are the parameters of this NN.\\n- VAE, GAN: q is the generative model defined as a mapping of a standard multivariate normal distribution by a NN.\\n- sigmoid belief networks: q is a Bayesian network where each conditional distribution is a logistic regression model.\\nThen state to which of these cases the results of the paper are applicable, whether they allow for an improvement of the variance, and at what additional computational cost (considering the cost of evaluating the discrete derivatives).\\n\\nSection 3.\\nContrary to the discussion, there are examples of non-negative distributions to which the reparameterization trick can be applied, including log-Normal and Gamma distributions.\", \"method\": \"In the case when the Rep trick is applicable, is it identical to GO? The difference seems to be only in that the mapping tau may be different from Q^-1. However, this only affects the method of drawing the samples from a fixed known distribution and should have no more effect on the results than, say, a choice of a pseudo-random number generator. Yet, in Fig.1 some difference is observed between the methods; why is that so?\\n\\nSec 7.1\\n\\u201cWe adopt the sticking approach hereafter\\u201d. Does it mean it is applied in all experiments with GO?\\n\\n* Related Work\", \"the_state_of_the_art_allows_combining_differentiable_and_non_differentiable_pieces_of_computation\": \"[Schulman, J., Heess, N., Weber, T., Abbeel, P.: Gradient estimation using stochastic computation graphs.]\\nI believe it should be discussed in related work. Limitations / where the proposed method brings an improvement should be highlighted.\\n\\n* Technical Correctness\\nEquations (5) and (6) require a theorem on differentiation under the integral (expectation), such as the Leibniz rule, which in the case of (6) requires q_gamma(y)f(y) to be continuous in y and q_gamma(y) continuously differentiable in gamma.\\nEquation (7) (integration by parts) holds only with some additional requirements on f.\\nTheorem 1 does not account for the above conditions.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Reasonable methods but some unclear points\", \"review\": \"The paper designs a low-variance gradient for distributions associated with continuous or discrete random variables. The gradient is designed in such a way as to approximate the properties of the reparameterization gradient. The paper is comprehensive and includes mathematical details.\\n\\nI have the following comments/questions\\n\\n1. What does the \\kappa in \\u201cvariable-nabla\\u201d stand for? What is the gradient w.r.t. \\kappa?\\n\\n2. In Eq. (8), is the outer expectation w.r.t. 
y_{-v} approximated by one sample? If so, it is using the local expectation method. How does that differ from Titsias & Lazaro-Gredilla (2015), both mathematically and experimentally? \\n\\n3. Assume y_v follows an M-way categorical distribution; Eq. (8) then evaluates f 2*V*M times, which can be computationally expensive. What is the computational complexity of GO? How to explain the fast speed shown in the experiments?\\n\\n4. The simplest way to reduce the variance of the REINFORCE gradient is to take multiple Monte-Carlo samples, at the cost of more computation with multiple function f evaluations. Assuming the GO gradient needs to evaluate f N times, how does its performance compare with the REINFORCE gradient with N Monte-Carlo samples? \\n\\n5. In the discrete VAE experiment, upon briefly checking the results in Grathwohl (2017), it shows the validation ELBO for MNIST as (114.32,111.12) and OMNIGLOT as (122.11,128.20), which in two cases are better than GO. Does the hyperparameter setting favor the GO gradient in the reported experiments? Error bars may also be needed for comparison. What about the performance of the GO gradient in the 2-stochastic-layer setting in Grathwohl (2017)?\\n\\n6. The paper claims GO has fewer parameters than REBAR/RELAX. But in Figure 9, GO has more severe overfitting. How to explain this contradiction between the model complexity and overfitting?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response to Questions\", \"comment\": \"Thanks for your interest. Your comments are addressed below.\\n\\n(1) Similar to GO, Implicit Reparameterization (ImplicitRep) Gradients (Jankowiak & Obermeyer 2018, Figurnov et al. 2018) tried to exploit the gradient information of the function f(z) for lower Monte Carlo variance, via a technique they termed implicit differentiation. Although seeming different, ImplicitRep is more or less a special case of GO in the single-layer continuous situation (thus no need for comparison). One can reveal this by comparing their Eq. (5) with our Eq. (9) in Theorem 1. The difference is that GO generalizes to discrete situations (Theorem 1), and also to deep probabilistic graphical models (Theorems 2 and 3). \\n\\n(2) As stated in the paragraph before Section 4, we adopt the local expectation idea when it is applicable and computationally acceptable. In some specific cases, like discrete random variables with finite support, fully applying the local expectation idea will reduce GO to the LEgrad. However, GO has the advantage that it is applicable to discrete situations with (1) infinite support (where LEgrad may not be applicable); (2) finite support (where LEgrad may be computationally expensive). \\n\\n(3) Thank you for your suggestions. We plan to fully exploit (and potentially improve) GO under various (discrete) cases in the future. However, we consider it beyond the scope of this conference paper, which is meant for presenting the derivation of a unified gradient that is widely applicable. \\n\\n(4) ARM (Yin 2018), using techniques (including data augmentation, permutation, and variance reduction) to aid REINFORCE for gradient calculation, is applicable to discrete situations with finite support. By comparison, GO, motivated by the connection of REINFORCE and Rep, (1) is a widely applicable gradient (continuous or discrete); and (2) can be applied to discrete situations with infinite support. 
There might be some implicit relations between ARM and GO. We leave that as future work.\"}", "{\"comment\": \"1) Implicit reparameterization gradients (Jankowiak & Obermeyer 2018, Figurnov et al. 2018) already show improvements over GRep and RSVI, so it would seem natural to use them as the baseline for Sec 7.1. In this setting, what is the relationship between GO and Implicit Reparameterization Gradients?\\n\\n2) In Sec 7.2, GO gradients require evaluating f many times. It would seem natural to compare to Local Expectation gradients in this case. What is the relationship between GO and LEgrad in this case?\\n\\n3) For the discrete case, because we are making many calls to f, would it make sense to compare to multisample techniques (e.g., VIMCO)?\\n\\n4) ARM (Yin 2018) is a recent technique for discrete random variables that uses multiple function evals. What is the relation with GO gradients?\", \"title\": \"Questions\"}" ] }
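The estimator discussed in the replies above, g_kappa = -nabla_kappa Q(y) / q(y) from Theorem 1, can be sanity-checked numerically whenever the CDF Q is available in closed form. The sketch below is our illustration (not the authors' code) for a one-dimensional Exponential(rate lam) variable with f(y) = y^2, where -nabla_lam Q / q works out to -y/lam; it also makes the REINFORCE variance comparison from the rebuttal concrete.

```python
# Numeric check of the GO (implicit-reparameterization-style) gradient for
# y ~ Exponential(rate lam) and f(y) = y**2. Here Q(y) = 1 - exp(-lam * y),
# so -dQ/dlam / q(y) = -y / lam, and the single-sample GO estimate is
# f'(y) * (-y / lam). Analytic target: d/dlam E[y^2] = -4 / lam**3.
import numpy as np

rng = np.random.default_rng(0)
lam, n = 1.5, 200_000
y = rng.exponential(scale=1.0 / lam, size=n)

go = 2.0 * y * (-y / lam)              # GO: f'(y) * (-grad_lam Q / q)
reinforce = y**2 * (1.0 / lam - y)     # REINFORCE: f(y) * grad_lam log q(y)

print(f"analytic   : {-4.0 / lam**3:+.4f}")
print(f"GO         : {go.mean():+.4f} (std {go.std():.2f})")
print(f"REINFORCE  : {reinforce.mean():+.4f} (std {reinforce.std():.2f})")
```

Both estimators are unbiased and agree with the analytic value of about -1.185 at lam = 1.5, but the REINFORCE column shows a clearly larger standard deviation, mirroring the single-sample low-variance behavior the one-dimensional NB experiment in the thread is probing.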
HJz6tiCqYm
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
[ "Dan Hendrycks", "Thomas Dietterich" ]
In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.
[ "robustness", "benchmark", "convnets", "perturbations" ]
https://openreview.net/pdf?id=HJz6tiCqYm
https://openreview.net/forum?id=HJz6tiCqYm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BygwXHNOlV", "ryxahIfEgE", "HylJlCQklN", "SJefbjX1e4", "S1l6EPphkE", "rylvrBOmkV", "SJladTB7y4", "rJxW-nzJyN", "rklBbUGA0m", "rJxnuikRA7", "SJgvBRYaR7", "rye6cKP5Cm", "Skg0XKPqR7", "BkxNgFD9RX", "r1xJYOvcR7", "B1xEWdwc0Q", "B1xfnvvc0m", "SklrTPkU07", "rkl_eOltTX", "S1xvCfDD6X", "HkgLx3DzpQ", "ryeoWVTch7" ], "note_type": [ "official_comment", "comment", "meta_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1545254175504, 1544984245371, 1544662502855, 1544661753617, 1544505141050, 1543894334824, 1543884149097, 1543609337334, 1543542268656, 1543531380131, 1543507519447, 1543301524767, 1543301414006, 1543301356445, 1543301238843, 1543301116031, 1543301034379, 1543006141272, 1542158320491, 1542054606709, 1541729261518, 1541227522752 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper489/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper489/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper489/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "(anonymous)" ], [ "~Dogancan_Temel1" ], [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "ICLR.cc/2019/Conference/Paper489/Authors" ], [ "~Dogancan_Temel1" ], [ "ICLR.cc/2019/Conference/Paper489/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper489/AnonReviewer3" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper489/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Reply\", \"comment\": \"Thank you for your interest! Since this task is not adversarial in nature, we do not intend to continually modify the corruptions to subvert new approaches, much like how CIFAR-10 did not continually change to make classification harder for every new architecture and method. Improved generalization to unseen corruptions suggests improved corruption robustness. However if necessary we are open to updating the benchmark, but we will first see whether the research community experiments in this setting.\"}", "{\"comment\": \"Hi, it\\u2019s an interesting work!\\n\\nI would like to ask the authors how to ensure that the benchmarks are sufficiently representative to evaluate the robustness of models.\\n\\nWill the benchmarks be updated in the future as new adversarial attacks (Corruptions or Perturbations) emerge?\", \"title\": \"Question about the Representativity and the Time-Efficiency of the benchmarks\"}", "{\"metareview\": \"The reviewers have all recommended accepting this paper thus I am as well. 
Based on the reviews and the selectivity of the single track for oral presentations, I am only recommending acceptance as a poster.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"clear consensus to accept this paper\"}", "{\"title\": \"Unclear to me what these papers add over currently cited papers\", \"comment\": \"This work already cites many previous fragility studies, both from robustness to random corruptions/perturbations and with respect to worst-case corruptions, some works which include robustness to translations. Based on a quick reading of the proposed additional citations it is unclear to me what these works add on top of what is already cited. I have no strong opinion either way whether additional citations are added, I leave it up to the authors or other reviewers to decide what is best for the proper context of this work.\"}", "{\"title\": \"Citations\", \"comment\": \"Please excuse the delayed response, as we were at NeurIPS.\\n\\nThe original poster sent an e-mail many months ago including numerous links to many papers, including several of their own. We conclude this because we received only one e-mail with citation suggestions. In consequence, we cited two of the papers authored by the person sending the e-mail, giving a sentence description for each citation. Several months later, the email sender posted the comment above. The only link which appeared in both the e-mail and in the comment above is Engstrom et al. (which is under review). The Fawzi et al. and Kanbak et al. papers are new to us. These may be added to the \\\"ConvNet Fragility Studies\\\" section. We think it is a reasonable suggestion to spend more time discussing other ConvNet perturbation fragility findings, although we do already cite works which mention translation instability (such as the parallel work of Azulay & Weiss, 2018).\"}", "{\"comment\": \"If I am interpreting the comment correctly, the author seems to be saying that they have cited the sender of the email's other papers, but does not see the need to cite any of the papers listed above.\\n\\nThis is a bit confusing as our comments are not an attempt to \\\"extort\\\" citations, but rather an effort to put this work in the right context. The fact that the authors cite other (admittedly less relevant) papers of the email sender does not render the suggested work less relevant.\\n\\nTo reiterate, I believe that all these works are very relevant to the subject of the above paper. If the authors do not want to cite these papers, that is okay - however one would expect them to at least explain in OpenReview why or give a brief comparison.\", \"title\": \"response\"}", "{\"title\": \"general approach to anonymous remarks about missing citations\", \"comment\": \"It sounds as if the authors agree with the suggestion, although I am not completely sure. However, I would like to emphasize that if they did not agree, then it would be up to the reviewers to determine whether adding these citations was important. Without investigating further, I have no position either way.\\n\\nBut, in general, our obligation as scientists is to cite other work when doing so benefits the reader. 
We should exercise our own taste in what we cite and avoid citing things that we do not think enhance the experience of the reader.\", \"authors\": \"please don't hesitate to ask reviewers+AC to weigh in if you are ever in doubt about the importance of adding a particular citation.\"}", "{\"title\": \"Data Augmentation with Stylized ImageNet Improves Corruption Robustness\", \"comment\": \"A parallel submission proposes to train classifiers on stylized ImageNet images. The aim is to make classifiers rely less on texture and more on shape. https://openreview.net/pdf?id=Bygh9j09KX\\n\\nWe have found that this method indeed improves corruption robustness. A ResNet-50 obtains an mCE of 76.70%, while a ResNet-50 trained on both ImageNet images and stylized ImageNet images has an mCE of 69.32% (with general improvements in the noise, blur, weather, and digital categories).\"}", "{\"title\": \"Revised Discussion\", \"comment\": \"We would be happy to expand the related works further in future revisions of this draft. We have cited the sender of the e-mail from \\\"a long time ago\\\" twice in the current draft, but we can add more in a future revision.\"}", "{\"comment\": \"I would like to point out that this submission is missing a discussion of some very relevant prior work. That work already evaluates the robustness of ML classifiers to naturally occurring transformations such as rotations and translations. Specifically:\\n\\n\\u2022 Fawzi et al. (2015) [https://arxiv.org/abs/1507.06535] compute the minimum transformation (composed of rotations, translations, scaling, etc.) needed to cause a misclassification for a wide variety of models. They find that it is relatively small, and make several observations about the relative robustness of different classifiers.\\n\\n\\u2022 Engstrom et al. (2017) [https://arxiv.org/abs/1712.02779] fix a range of rotations and translations and compute the worst-case accuracy of models over this space. They also find models to be relatively non-robust and propose methods for improving it.\\n\\n\\u2022 Kanbak et al. (2018) [https://arxiv.org/abs/1711.09115] develop a first-order method to find such worst-case transformations fast. They show that this method can then be used to perform adversarial training and improve the model's robustness.\\n\\nThe authors were already notified about the existence of some of this prior work a long time ago, but still seem to dismiss it.\", \"title\": \"Discussion of prior work missing\"}", "{\"comment\": \"Thanks for the quick response. Also, I really appreciated the additional section at the end of the paper where you talk about the robustness enhancement attempts; it is good to know not just what worked but also what did not work and why.\", \"title\": \"Thanks\"}", "{\"title\": \"Reviewer 1 Reply\", \"comment\": \"We thank you for your careful analysis of our paper.\\n\\n\\u201cQuestion: Why do the authors not recommend training on the new datasets?\\u201d\\nWe do not suggest this as the datasets are corrupted or perturbed forms of clean ImageNet validation images, and training on these specific corruptions would no longer provide a test of generalization ability to novel forms of corruption.
Researchers could train on various other corruptions, such as film grain, adversarial noise, HSV noise, uniform noise, high-pass filtering, median blur, spherical camera distortions, pincushion distortions, out-of-distribution object occlusions, stylized images ( https://openreview.net/forum?id=Bygh9j09KX ), lens scratches, image quilting, color quantization, etc.\\n\\n\\u201cAre there other useful adversarial defenses?\\u201d\\nDifferent adversarial training schemes can degrade accuracy so much that they perform worse on these benchmarks. Many other adversarial defenses which do not train on adversarial or benign noise have been shown not to provide robustness on noise corruptions (see the thorough work of https://openreview.net/pdf?id=S1xoy3CcYX Figure 3). In the coming month, we intend to explore more combinations of techniques to increase robustness, such as the combinations you suggest. In the appendix we explicate four attempts which did not lead to added robustness.\"}", "{\"title\": \"Augmentation Clarification\", \"comment\": \"Noises such as those from gradients or uniform noise are perfectly acceptable forms of augmentation for this task. In the stability training experiment, we observed only minor gains in perturbation robustness when training with uniform noise, but perhaps training with more severe uniform noise could improve corruption robustness. In the revised paper, we make it clearer that training with other forms of data augmentation is acceptable. Please forgive this confusion.\"}", "{\"title\": \"Reviewer 3 Reply\", \"comment\": \"Thank you for your interest in this topic and your analysis of our paper.\\n\\n\\u201cI think it might be more realistic to allow training on a subset of the corruptions.\\u201d\\nResearchers could train on various other corruptions, such as film grain, adversarial noise, HSV noise, uniform noise, high-pass filtering, median blur, spherical camera distortions, pincushion distortions, out-of-distribution object occlusions, stylized images ( https://openreview.net/forum?id=Bygh9j09KX ), lens scratches, image quilting, color quantization, etc. We have updated the text to make it clearer that researchers can train on more than just cropped and flipped images, but we still do not want researchers training on the test corruptions. In the paper we experimented with uniform noise data augmentation in the stability training experiment and found minor perturbation robustness gains, but not with Gaussian noise with a large standard deviation.\\n\\nThank you for pointing out that the brief Stone comment requires much more context. For that reason we have removed the citation. Essentially, if f is a model and f_hat is an approximation, and if the input x is d-dimensional, then if we want | f(x) - f_hat(x) | < epsilon, in some scenarios the number of samples necessary scales as epsilon^{-d}. Other context is on slide 10 of https://github.com/joanbruna/MathsDL-spring18/blob/master/lectures/lecture1.pdf\\n\\n\\u201cl infinity perturbations on small images\\u201d\\nThanks to your suggestion, we have changed this to \\u201cperturbations on small images.\\u201d We kept the word \\u201csmall\\u201d as the images often have side length 32 pixels.
We removed \\u201cl_infinity\\u201d since that method has had some success for perturbations which are small in an l_2 sense.\"}", "{\"title\": \"Reviewer 2 Reply\", \"comment\": \"We thank you for taking time to review our work.\"}", "{\"title\": \"Cited Work\", \"comment\": \"Thank you for your interest in this topic and making us aware of your work. An earlier draft of our work appeared months before the time of the ICLR submission deadline, and we have added all citations to your traffic sign recognition work and your parallel works.\"}", "{\"title\": \"Minor Revision Posted\", \"comment\": \"We should like to thank all of the reviewers and commenters for their constructive comments and kind reception. Independent from their comments, we have created CIFAR-10-C and CIFAR-10-P which could be adequate for rapid experimentation. Also in the revised version is a new appendix where we briefly analyze a different notion of robustness separate from our main contributions. We will respond to each reviewer\\u2019s comments individually.\"}", "{\"comment\": \"I would like to thank the authors for focusing on such a critical issue in a comprehensive manner. Algorithmic solutions behind the core technologies have to be robust even under challenging conditions in order for such technologies to be effective and useful in our daily lives. With more and more studies similar to the submitted ICLR work, we can identify the weaknesses and strengths of existing algorithms to develop more reliable perception systems. One of the main contributions of the submitted work is based on the common corruptions and perturbations not worst-case adversarial perturbations. With a similar mindset, we have introduced three datasets, two for traffic signs (CURE-TSR [2], CURE-TSD [3]) and one for generic objects (CURE-OR [1]) to investigate the robustness of recognition/detection systems under challenging conditions corresponding to adversaries that can naturally occur in real-world environments and systems. The controlled challenging conditions in the CURE-OR [1] dataset include underexposure, overexposure, blur, contrast, dirty lens, image noise, resizing, and loss of color information. And the controlled conditions in the CURE-TSR [2] and CURE-TSD [3] datasets include rain, snow, haze, shadow, underexposure, overexposure, blur, dirtiness, loss of color information, sensor and codec errors. Based on the similarities between introduced datasets and conducted studies, including aforementioned studies in the literature analysis of the submitted paper can be helpful to reflect recent related work. Looking forward to authors\\u2019 upcoming studies, thanks.\\n\\n[1] D. Temel*, J. Lee*, and G. AlRegib, \\u201cCURE-OR: Challenging unreal and real environments for object recognition,\\u201d IEEE International Conference on Machine Learning and Applications, Orlando, Florida, USA, December 2018, (*: equal contribution). https://arxiv.org/abs/1810.08293\\n[2] D. Temel, G. Kwon*, M. Prabhushankar*, and G. AlRegib, \\u201cCURE-TSR: Challenging unreal and real environments for traffic sign recognition,\\u201d Advances in Neural Information Processing Systems (NIPS) Workshop on Machine Learning for Intelligent Transportation Systems, Long Beach, U.S., December 2017, (*: equal contribution).https://arxiv.org/abs/1712.02463\\n[3] D. Temel and G. AlRegib, \\u201cTraffic Signs in the Wild: Highlights from the IEEE Video and Image Processing Cup 2017 Student Competition [SP Competitions],\\u201d in IEEE Signal Processing Magazine, vol. 
35, no. 2, pp. 154-161, March 2018. https://arxiv.org/abs/1810.06169\", \"title\": \"Related Work\"}", "{\"title\": \"It is an important work for deep learning research.\", \"review\": \"This paper introduces two benchmarks for image classifier robustness, ImageNet-C and ImageNet-P. The benchmarks cover two important cases in classifier robustness which are ignored by most current researchers. The authors' evaluations also show that current deep learning methods have considerable room for improvement. To the best of our knowledge, this is the first work that systematically provides common benchmarks for the deep learning community. The reviewer believes that these two benchmarks can play an important role in the research of image classifier robustness.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An important benchmark for measuring the robustness of computer vision models\", \"review\": \"This paper introduces new benchmarks for measuring the robustness of computer vision models to various image corruptions. In contrast with the popular notion of \\u201cadversarial robustness\\u201d, instead of measuring robustness to small, worst-case perturbations this benchmark measures robustness in the average case, where the corruptions are larger and more likely to be encountered at deployment time. The first benchmark, \\u201cImagenet-C\\u201d, consists of 15 commonly occurring image corruptions, ranging from additive noise and simulated weather corruptions to digital corruptions arising from compression artifacts. Each corruption type has several levels of severity, and the overall corruption score is measured by improved robustness over a baseline model (in this case AlexNet). The second benchmark, \\u201cImagenet-P\\u201d, measures the consistency of model predictions in a sequence of slightly perturbed image frames. These image sequences are produced by gradually varying an image corruption (e.g. gradually blurring an image). The stability of model predictions is measured by changes in the order of the top-5 predictions of the model. More stable models should not change their prediction under minute distortions of the image. Extensive experiments are run to benchmark recent architecture developments on this new benchmark. It\\u2019s found that more recent architectures are more robust on this benchmark, although this gained robustness is largely due to the architectures being more accurate overall. Some techniques for increasing model robustness are explored, including a recent adversarial defense, \\u201cAdversarial Logit Pairing\\u201d; this method was shown to greatly increase robustness on the proposed benchmark. The authors recommend that future work benchmark performance on this suite of common corruptions without training on these corruptions directly, and cite prior work which has found that training on one corruption type typically does not generalize to other corruption types. Thus the benchmark is a method for measuring model performance on \\u201cunknown\\u201d corruptions which should be expected during test time.\\n\\nIn my opinion this is an important contribution which could change how we measure the robustness of our models. Adversarial robustness is a closely related and popular metric but it is extremely difficult to measure and reported values of adversarial robustness are continuously being falsified [1,2,3].
In contrast, this benchmark provides a standardized and computationally tractable benchmark for measuring the robustness of neural networks to image corruptions. The proposed image corruptions are also more realistic, and better model the types of corruptions computer vision models are likely to encounter during deployment. I hope that future papers will consider this benchmark when measuring and improving neural network robustness. It remains to be seen how difficult the proposed benchmark will be, but the authors perform experiments on a number of baselines and show that it is non-trivial and interesting. At a minimum, solving this benchmark is a necessary step towards robust vision classifiers. \\n\\nAlthough I agree with the author\\u2019s recommendation that future works not train on all of the Imagenet-C corruptions, I think it might be more realistic to allow training on a subset of the corruptions. The reason why I mention this is that it\\u2019s unclear whether or not adversarial training should be considered as performing data augmentation on some of these corruptions; it certainly is doing some form of data augmentation. Concurrent work [4] has run experiments on a resnet-50 for Imagenet and found that Gaussian data augmentation with large enough sigma (e.g. sigma = .4 when image pixels are on a [0,1] scale) does improve robustness to pepper noise and Gaussian blurring, with improvements comparable to that of adversarial training. Have the authors tried Gaussian data augmentation to see if it improves robustness to the other corruptions? I think this is an important baseline to compare with adversarial training or ALP.\\n\\nA few specific comments/typos:\\n\\nPage 2 \\u201cl infinity perturbations on small images\\u201d\\n\\nThe (Stone, 1982) reference is interesting, but it\\u2019s not clear to me that their main result has implications for adversarial robustness. Can the authors clarify how to map the L_p norm in function space of ||T_n - T(theta)|| to the traditional notion of adversarial robustness?\\n\\n1. https://arxiv.org/pdf/1705.07263.pdf\\n2. https://arxiv.org/pdf/1802.00420.pdf\\n3. https://arxiv.org/pdf/1607.04311.pdf\\n4. https://openreview.net/forum?id=S1xoy3CcYX&noteId=BklKxJBF57\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"comment\": \"You've shown that ALP performs so well on this benchmark, but ALP performs some form of data augmentation by training on worst-case perturbations. Therefore, it's unclear whether or not this satisfies the recommendation that future work not train on the Imagenet-C corruptions. Have you compared the ALP model with simply performing Gaussian data augmentation? Some recent adversarial defense works have reported that Gaussian data augmentation improves small perturbation robustness.\", \"title\": \"Interesting work!\"}", "{\"title\": \"Exciting paper!\", \"review\": \"Summary: This paper observes that a major flaw in common image-classification networks is their lack of robustness to common corruptions and perturbations. The authors develop and publish two variants of the ImageNet validation dataset, one for corruptions and one for perturbations. They then propose metrics for evaluating several common networks on their new datasets and find that robustness has not improved much from AlexNet to ResNet.
They do, however, find several ways to improve performance, including using larger networks, using ResNeXt, and using adversarial logit pairing.\", \"quality\": \"The datasets and metrics are very thoroughly treated, and are the key contribution of the paper. Some questions: What happens if you combine ResNeXt with ALP or histogram equalization? Or any other combinations? Is ALP equally beneficial across all networks? Are there other useful adversarial defenses?\", \"clarity\": \"The novel validation sets and the reasoning for them are well-explained, as are the evaluation metrics. Some explanation of adversarial logit pairing would be welcome, and some intuition (or speculation) as to why it is so effective at improving robustness.\", \"originality\": \"Although adversarial robustness is a relatively popular subject, I am not aware of any other work presenting datasets of corrupted/perturbed images.\", \"significance\": \"The paper highlights a significant weakness in many image-classification networks, provides a benchmark, and identifies ways to improve robustness. It would be improved by more thorough testing, but that is less important than the dataset, metrics, and basic benchmarking provided.\", \"question\": \"Why do the authors not recommend training on the new datasets?\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
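The ImageNet-P side of the record above measures prediction stability over sequences of gradually perturbed frames. A minimal sketch of a flip-probability-style metric is given here; the exact metric in the paper also tracks changes in the top-5 ranking and normalizes by an AlexNet baseline, which this simplified version omits.

```python
import numpy as np

def flip_probability(pred_sequences):
    """Fraction of consecutive frames on which the top-1 prediction flips.

    pred_sequences: array of shape (num_sequences, frames_per_sequence)
    holding the top-1 class predicted for each perturbed frame.
    """
    preds = np.asarray(pred_sequences)
    # Compare each frame's prediction with the previous frame's.
    flips = preds[:, 1:] != preds[:, :-1]
    return flips.mean()
```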
SyxaYsAqY7
Second-Order Adversarial Attack and Certifiable Robustness
[ "Bai Li", "Changyou Chen", "Wenlin Wang", "Lawrence Carin" ]
Adversarial training has been recognized as a strong defense against adversarial attacks. In this paper, we propose a powerful second-order attack method that reduces the accuracy of the defense model by Madry et al. (2017). We demonstrate that adversarial training overfits to the choice of the norm in the sense that it is only robust to the attack used for adversarial training, thus suggesting it has not achieved universal robustness. The effectiveness of our attack method motivates an investigation of provable robustness of a defense model. To this end, we introduce a framework that allows one to obtain a certifiable lower bound on the prediction accuracy against adversarial examples. We conduct experiments to show the effectiveness of our attack method. At the same time, our defense model achieves significant improvements compared to previous works under our proposed attack.
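The attack debated in the reviews of this record reduces, after approximation, to averaging gradients over several Gaussian-perturbed copies of the current iterate (the discussion notes its equivalence to EOT with Gaussian noise). A PyTorch-style sketch of one L2 step follows; the hyperparameter values, single-example treatment, and function name are illustrative assumptions, not the paper's settings.

```python
import torch

def so_attack_step(model, loss_fn, x_adv, x_clean, y, sigma=0.1,
                   n_samples=10, step_size=0.1, eps=0.8):
    """One iteration of the noise-averaged L2 attack: estimate the ascent
    direction by averaging gradients at Gaussian-perturbed copies of the
    current iterate, step along it, and project the total perturbation
    back onto the L2 ball of radius eps around the clean input."""
    grad = torch.zeros_like(x_adv)
    for _ in range(n_samples):
        x_noisy = (x_adv + sigma * torch.randn_like(x_adv)).detach().requires_grad_(True)
        loss = loss_fn(model(x_noisy), y)
        grad = grad + torch.autograd.grad(loss, x_noisy)[0]
    grad = grad / n_samples
    # Normalized gradient step (single example assumed for the norms).
    x_next = x_adv + step_size * grad / (grad.norm() + 1e-12)
    delta = x_next - x_clean
    norm = delta.norm()
    if norm > eps:
        delta = delta * (eps / norm)
    return (x_clean + delta).detach()
```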
[ "attack", "defense model", "adversarial attack", "effectiveness", "certifiable robustness", "strong defense", "adversarial attacks", "powerful", "accuracy" ]
https://openreview.net/pdf?id=SyxaYsAqY7
https://openreview.net/forum?id=SyxaYsAqY7
ICLR.cc/2019/Conference
2019
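The defense discussed at length in the reviews below combines training under Gaussian input noise with a stability penalty between the clean and noisy predictive distributions. A rough sketch is given here; the placement of the cross-entropy terms and the role of gamma follow the reviewers' and authors' descriptions (gamma = 0 reducing to plain Gaussian-perturbation training), but the exact objective should be taken from the paper.

```python
import torch
import torch.nn.functional as F

def stability_training_loss(model, x, y, sigma=0.1, gamma=1.0):
    # Classification loss under Gaussian input perturbation.
    x_noisy = x + sigma * torch.randn_like(x)
    logits_noisy = model(x_noisy)
    task_loss = F.cross_entropy(logits_noisy, y)
    # Stability term: cross-entropy between the distributions predicted
    # at the clean and the perturbed point; gamma = 0 recovers simple
    # training with Gaussian perturbation, per the author responses.
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
    stability = -(p_clean * F.log_softmax(logits_noisy, dim=1)).sum(dim=1).mean()
    return task_loss + gamma * stability
```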
{ "note_id": [ "S1eKt3DHgN", "HyxnNpt86Q", "S1lzXzII67", "Syxt0EhChQ", "SyeJa4nC2Q", "H1xCLf502X", "H1xyYFYAnm", "BJeuzz5TnX", "Skx7vgPc27", "r1xf0z-XnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545071744842, 1541999924511, 1541984794139, 1541485776701, 1541485751305, 1541476950311, 1541474679118, 1541411344426, 1541202010680, 1540719306336 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper488/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper488/Authors" ], [ "ICLR.cc/2019/Conference/Paper488/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper488/Authors" ], [ "ICLR.cc/2019/Conference/Paper488/Authors" ], [ "ICLR.cc/2019/Conference/Paper488/Authors" ], [ "ICLR.cc/2019/Conference/Paper488/Authors" ], [ "ICLR.cc/2019/Conference/Paper488/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper488/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper488/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers have agreed this work is not ready for publication at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reject\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the thoughtful responses.\\n\\n1) We will emphasize the point that our method broke adversarial learning only when different norms are used in the training and testing in the main sections.\\n\\nWe agree that gradient obfuscation might exist from this perspective. This suggests that adversarial training does not truly achieve adversarial robustness on MNIST, which could be an important implication from our results.\\n\\n2) It is true that we are doing the same task with a different tool, but the increment of the bound is significant instead of ''slightly''. From Figure 4 one can observe that we improve the bound from 0.75 to 1.75 when p_(1)->1 and p_(2)->0. In addition,t he tool we used is totally different from the one used in Matthias et al., and the proof is non-trivial. \\n\\nWe want to clarify that our proposed method is exactly the approach of Mathias et al. (2018) combined with stability trained networks in practice, therefore the comparisons a) and b) are the same. We will add this comparison in the revised version to better illustrate the improvement.\\n\\n3) We are not suggesting that the evaluation is not reliable. In the last rebuttal what we want to suggest is that empirical evaluation overall is not reliable in the sense that empirical evaluation does not lead to any formal proof for the observed robustness. For example, although adversarial training is widely acknowledged as the most successful defense method in the literature, there are already many new attacks (including ours) that reduce its robustness reported in the original paper.\\n\\nWe will add more experiment results for different attacks to validate the robustness of our method.\"}", "{\"title\": \"Response\", \"comment\": \"I appreciate the authors taking the time respond to my comments.\\n\\n1) I understand the point about L_inf adversarial training not resulting in L_2 robustness. However, the way the paper is currently written, the emphasis is placed on SO attacks being able to bypass adversarial training by being more powerful than FO attacks. 
The difference between the norms used is only mentioned in the intro and the discussion and not at all in the main sections describing your contributions. I would suggest that this nuance be emphasized in future versions of the paper.\\n\\nNote that the proposed SO attacks are not using gradient information at the current iterate, but are rather using gradient information from a random nearby point. This would explain why they might evade the flat regions caused by threshold filters on MNIST. Given that this is the only case where the proposed SO attack is more powerful than standard PGD, I still believe that it is an artifact of the particular dataset and norm choice.\\n\\n2) I understand that achieving certifiable robustness to a larger scale of perturbations is challenging. However, I personally view the results of Mathias et al. (2018) as \\\"One can certify robustness to small epsilon values by adding random noise to the input\\\". I view the result of the current paper as \\\"One can use a different toolkit to certify slightly larger epsilon values when adding random noise to the input\\\". Hence while the approach and analysis are interesting, I find the result incremental.\\n\\nIt is still not clear to me what the importance of stability training is. It would be helpful if the following comparisons were included: a) the certified bound using the approach of Mathias et al. (2018) _both_ for standard and stability-trained networks, b) the certified bound using the proposed approach _both_ for standard and STN. (The two approaches should be compared using the best setting of hyper-parameters for each.)\\n\\n3) Even if the proposed defense is not one of the main contributions of the paper, it is still claimed as a contribution. There are several points in the paper where it is argued that the proposed defense outperforms other state-of-the-art methods and that \\\"... our results are reliable.\\\". If the evaluation happens to indeed be unreliable, then the claim could be misleading. This would be a significant flaw for a paper that cannot be ignored during the review process. If the empirical performance is indeed not a major concern, then it should be removed from the paper.\"}", "{\"title\": \"Author response 2\", \"comment\": \"5) There are indeed many results for robust optimization and certifiable robustness. The main advantages of our method are: a) Gao et al. (2016) derive a more general result for optimization over a Wasserstein ball of a nominal distribution. Namkoong et al. also study robust optimization over a \\\\phi-divergence ball of a nominal distribution. In the literature, the divergence between distributions is rarely used to empirically evaluate the strength of adversarial attacks. On the other hand, our analysis utilizes the fact that the Renyi divergence between two equal-variance Gaussian distributions is a function of the L_2 norm of the difference of their means to derive a bound that can directly be applied to the adversarial attack problem. b) It requires nothing about the structure of the classifiers. c) It is easy to compute. One can obtain the bound by multiple feedforward computations. Wong et al. (2017), for example, requires training an additional network.\\n\\nKolter, J. Zico, and Eric Wong. \\\"Provable defenses against adversarial examples via the convex outer adversarial polytope.\\\" arXiv preprint arXiv:1711.00851 2.4 (2017).\\n\\n6) Yes.
If we use Laplacian noise instead of Gaussian noise, we can derive similar results for the L_1 norm, as the Renyi divergence of Laplace random variables can be expressed as a function of the L_1 difference of their means. Unfortunately, there is no distribution that leads to a bound for the L_inf norm.\\n\\n7) \\n\\na) The use of the L_2 norm is quite standard in the literature; for example, see Madry et al. (2017). To see how strong or weak the perturbations are, we showed perturbed examples in Appendix G. We will add more illustrations for CIFAR10. We will also report the signal-to-noise ratio, as we agree it is a reasonable concern.\\n\\nMadry, Aleksander, et al. \\\"Towards deep learning models resistant to adversarial attacks.\\\" arXiv preprint arXiv:1706.06083 (2017).\\n\\nb) We thank the reviewer for this great suggestion. We will add results with mixed adversarial examples added during training in the revised version.\\n\\nc) It is because each input has its own L, as inputs have different p_(1) and p_(2). Therefore, it is difficult to illustrate L for a data set. Instead, we set a threshold L_0 ahead of time and find how many examples have an L that surpasses this threshold to quantify the robustness with respect to a data set.\\n\\n8) This distribution is generated by counting how many times our classifier gives a certain result when we run the feedforward procedure multiple times. Note that each time it may give different results, as we added noise at the beginning. After this, we can form a histogram, from which we calculate p_(1) and p_(2). The softmax is just the quantity we use to determine the class at each run. Please see steps 1-5 in Algorithm 1.\\n\\nThis is also related to point 4, where we mention that estimating the confidence interval for p_(1) and p_(2) is just estimating the confidence interval for a multinomial distribution. From the distribution-generating procedure above, it is clear that the p_j's obey a multinomial distribution.\\n\\nWe will fix the typo and cite the relevant papers in the revised version.\"}", "{\"title\": \"Author's response 1\", \"comment\": \"We thank the reviewer for the valuable comments. We respond to the questions and concerns in the following:\\n\\n1) \\n\\na) Our conclusion that improving robustness to Gaussian random noise helps improve adversarial robustness is consistent with the results of Fawzi et al. (2016). In (13), they proved that the random robustness is upper bounded by the adversarial robustness under certain conditions. On the other hand, our conclusion comes from the derived bound (6) in our paper.\\n\\nb) In Fawzi et al. (2018), the theoretical results are derived for a binary classification task with linear or quadratic classifiers. Our results are applicable to all kinds of classifiers. The intuition in this paper that more flexible classifiers achieve better robustness is partially consistent with our results. In our paper, we showed that a classifier with more confident classification (higher p_(1) and lower p_(2)) achieves better robustness.\\n\\nc) Adding some amount of Gaussian noise to reduce the effect of adversarial perturbations is intuitively rather straightforward. Their paper did not provide rigorous theoretical justification for their method but only reported empirical results.
We proved why Gaussian noise can actually provide robustness that no adversarial attack can break.\\n\\n\\n2) \\n\\na) The vanishing of the gradient is validated in Figure 1, where we show that the magnitude of the gradients for adversarial examples is much smaller than for natural examples. The effectiveness of our attack is evidence that the second-order information is not negligible. Otherwise, utilizing such information should not provide any merit. Our experiments show that introducing second-order information not only increases the magnitude of the gradients in the following steps but also makes our attack strong enough to break adversarial training.\\n\\nb) Using noise to extract curvature is exactly what we are doing here. Please see the proof in Appendix A. We will cite this paper as related work.\\n\\n3) One major difference between our method and PGD with random noise is that at each step, we add noise multiple times and update the example with the average gradient. If we only add noise once, as in a noisy PGD, it is equivalent to estimating the expectation of a distribution with only one sample, which is not sufficient. It is essential to add noise multiple times and take the average to precisely estimate the approximated second-order information. Empirically, a noisy PGD cannot break adversarial training. We will add an experiment to show this.\\n\\n4) We do not include these two details as they were fully discussed in Lecuyer, Mathias, et al. (2018). The first point is just estimating the confidence intervals of a multinomial distribution with i.i.d. samples, which is a standard statistical procedure. We will state it more explicitly in the algorithm in the revised version. For the second point, if we know the Lipschitz constant, one can think of the outputs of the first layer as the inputs and redo the analysis. The divergence between the new inputs is bounded by the Lipschitz constant multiplied by the divergence between the original inputs, as the divergence is just the L_2 norm. The estimation of the Lipschitz constant differs for different models, and we refer to Lecuyer, Mathias, et al. (2018) for details, as this is not the major focus of our paper.\\n\\nLecuyer, Mathias, et al. \\\"On the Connection between Differential Privacy and Adversarial Robustness in Machine Learning.\\\" arXiv preprint arXiv:1802.03471 (2018).\"}", "{\"title\": \"Author's response\", \"comment\": \"We thank the reviewer for the valuable comments. We respond to the questions and concerns in the following:\\n\\n1) Defense methods that try to make computing gradients difficult are called gradient obfuscation. This kind of defense has been shown to be vulnerable to stronger attacks. Please see https://arxiv.org/abs/1802.00420. Therefore, we do not discuss this kind of defense in our paper.\\n\\n2) We believe the effectiveness of the second-order information has been shown in Figure 2, where we show that using second-order information increases the magnitude of the gradients and reduces the accuracy of Madry's model. The point of the second-order attack is not to estimate the second-order information accurately, but to utilize the approximation to improve the effectiveness of our attack.\\n\\n3) Although there is an equivalence between EOT and our attack, the motivation is totally different. EOT was proposed to attack defense models where randomness is present. The goal of EOT is to reduce the effect of randomness in a defense model by introducing the same randomness in the attack.
In our attack, the randomness is used to reduce the effect of vanishing gradients and to escape from the \\\"degenerate global minimum\\\".\\n\\n4) The point of our paper is to demonstrate that the robustness of adversarial training cannot generalize to different choices of norms; that is, if the model is adversarially trained against the L_inf norm, then it is vulnerable to an L_2 attack. In the paper, we said that adversarial training is not *universally robust*. In practice, this is problematic because one does not know what kind of attacks will be used by the adversaries. Ideally, a robust model should be robust to all forms of attacks.\\n\\nWe believe that the fact that our bound does not rely on the properties of the neural network is a strength rather than a weakness. There are works that show certifiable bounds when assuming the model is simply feedforward or Lipschitz-smooth or uses some specific activation function, but these cannot be extended to other models. Our analysis, on the other hand, is applicable to all different models, such as CNNs, RNNs, and even models that are not neural networks. \\n\\nIn addition, to show the strength of our bound, as an example, a very recent paper https://arxiv.org/pdf/1811.00866v1.pdf (accepted by NIPS 2018) also proposes a certifiable bound of robustness (we did not cite this paper as it was released very recently). In Table 4 in their paper, one can see their L_2 norm bound is close to ours in Figure 1. \\n\\n5) This form of objective function has been studied by many papers. Our use of this form of the objective function is motivated by Zhang et al. (2016), Improving the Robustness of Deep Neural Networks via Stability Training. The goal of introducing this objective function is to improve the robustness against Gaussian noise, while \\\"Logit Pairing\\\" aims to improve the robustness against adversarial attacks. Logit Pairing has been shown to be vulnerable to adversarial attacks in https://arxiv.org/abs/1807.10272; therefore, we do not include this method. In addition, our method is equivalent to \\\"simple training with Gaussian perturbation\\\" when \\\\gamma=0. We did not include the results for different choices of \\\\gamma, but we would like to add these results in the revised version.\"}", "{\"title\": \"Author's response\", \"comment\": \"We thank the reviewer for the valuable comments. We respond to the questions and concerns in the following:\\n\\n1) We are not suggesting that there is a failure of adversarial training but trying to demonstrate that such a training strategy cannot generalize in terms of the choice of norms. In the paper, we said that adversarial training is not *universally robust*. In practice, this is problematic because one does not know what kind of attacks will be used by the adversaries. Ideally, a robust model should be robust to all forms of attacks. Our finding shows the importance of evaluating defense models under different norms, as many defense models only focus on the L_inf norm. \\n\\nWe thank the reviewer for pointing out that this phenomenon might be explained by the thresholding filters. Our second-order attack used the information of the gradients, yet still successfully attacked the adversarially trained model. If adversarial training on MNIST caused gradient obfuscation, any attack using gradient information could not break it. Therefore, our result suggests adversarial training does not cause gradient obfuscation on MNIST.\\n\\n2) The bound proposed in our paper is almost twice the one from Mathias et al. (2018), as suggested in Figure 4.
It is difficult to improve the robustness to a different scale. As an example, a very recent paper https://arxiv.org/pdf/1811.00866v1.pdf (accepted by NIPS 2018) also proposes a certifiable bound of robustness (we did not cite this paper as it was released very recently). In Table 4 in their paper, one can see the l_2 norm bound is on the same scale as ours.\\n\\nNote that without stability training, our method is the same as the one (PixelDP) proposed by Mathias et al. (2018) *in practice*, that is, adding Gaussian noise and picking the output with the highest probability. Therefore, the difference between PixelDP and STN is exactly the gain from stability training in the \\\"improved certifiable robustness\\\" section.\\n\\n3) We would like to emphasize that the main points of our paper are to 1) propose a new attack method that shows the weakness of adversarial training, and 2) propose a certifiable defense framework that allows us to calculate a bound on the robustness of a model. \\n\\nThe empirical performance of our defense is not the major concern in this paper, although its performance seems to be better than other methods in our experiments. As we pointed out in the discussion, there is a large gap between the theoretical bound and the empirical results, and it is possible that this is caused by an overestimation of the empirical performance. In general, no matter how many experiments are performed, a defense without theoretical justification is still vulnerable to unknown attacks. This is actually the motivation for our certifiable defense framework.\\n\\nWe understand that empirical evaluation is important as well. We will add more experimental results in the Appendix to help readers understand the empirical performance of our method. However, as mentioned, the point of our defense is to show a certifiable bound on the robustness. In fact, many papers on certifiable defense only perform experiments on the theoretical bounds but not on accuracies against attacks:\\n\\nhttps://arxiv.org/pdf/1811.00866v1.pdf\", \"minor\": \"1) We will change this sentence. What we meant is that there are some perspectives on adversarial training that are unclear.\\n2) It is the magnitude of the gradients of natural samples. The other two are the magnitudes of the gradients of adversarial examples generated by PGD and SO, respectively.\\n3) We will comment on that. Essentially, large noise will greatly hurt the classification accuracy, so it is important to keep a good balance.\\n4) It was trained with L_inf.\"}", "{\"title\": \"Interesting ideas, insufficient experimental evaluation.\", \"review\": \"The paper makes three rather independent contributions: a) a method for constructing adversarial examples (AE) utilizing second-order information, b) a method for certifying classifier robustness, c) a method to improve classifier robustness. I will discuss these three contributions separately.\\n\\na) Second order attack: Miyato et al. (2017) propose a method for constructing AE for the case where the gradient of the loss is vanishing. In this case, at a given point, the direction of steepest loss ascent can be approximated by the gradient at a randomly sampled nearby point. Miyato et al. (2017) show how this can be derived as a very crude approximation of the power method. The authors of the current paper apply this attack to the adversarially trained networks of Madry et al. (2017). They find that the *L_infinity* trained networks of that work are not as *L_2* robust as originally claimed.
I find this result interesting, highlighting a failure case of first-order methods (PGD) for evaluating adversarial robustness. However, it is important to note that these were models that were *not* trained against an L2 attack and thus should not be expected to be very robust to one. Therefore, this result does not identify a failure of adversarial training as the authors seem to suggest but rather a failure of the original evaluation of Madry et al. (2017). It is also worth noting that this finding is specific to MNIST given the results currently presented. This might be explained by the fact that robust MNIST models tend to learn thresholding filters (Madry et al., 2017) which might cause gradient obfuscation.\\n\\nb) Adversarial robustness certification: The authors propose a method for certifying the robustness of a model based on the Renyi divergence. The core idea is to define a stochastic classifier that randomly perturbs the input before classifying it. Given such a classifier, one can construct the probability distribution over classes. The authors prove that given the gap between the first and second most likely classes, one can construct a bound on the L2 norm of perturbations required to fool the classifier. This method is able to certify the adversarial accuracy of some classifiers to relatively small epsilon values. While I think the theoretical arguments are elegant, I find the overall contribution incremental given the work of Mathias et al. (2018). Both methods seem to certify robustness of roughly the same scale. One component missing from the experimental evaluation is how the certifiable accuracy differs between robust and non-robust models. Currently there are only results for a single model (Figure 1) and it is not clear from the text which one it is. Given that there exists a section titled \\\"improved certifiable robustness\\\" I would at least expect a result where a model with higher certifiable accuracy is constructed. \\n\\nc) Improved robustness via stability training: The authors propose a method to make a classifier more robust to input noise. They add a regularization term to the training loss that penalizes a change in the probabilities predicted by the network when the input is randomly perturbed. In particular, they use the cross-entropy loss between the probability distributions predicted at the original and the perturbed point. The goal is to train a model that is more robust to random perturbation, which will then hopefully translate to robustness to adversarial perturbation. This method is evaluated against the proposed attack (a) and is found to be more robust to that attack than previous adversarially trained models. Overall, I find the idea of stability training interesting. However, I find the current evaluation severely lacking. First of all, these models should be evaluated against a standard PGD adversary (missing from Table 1). Even if that method is unreliable when applying random noise to the input at each step, it is still an important sanity check. Additionally, in order to deal with the stochasticity of the model one should experiment with a PGD attack that estimates the gradient using multiple independent noise samples (see https://arxiv.org/abs/1802.00420). Finally, other attacks such as black-box attacks and finite-differences attacks should potentially be considered.
Given how other defenses based purely on data augmentation during training or testing were bypassed, it is important to apply a certain amount of care when evaluating the robustness of a model.\\n\\nOverall, while I think the paper contains interesting ideas, I find the current evaluation lacking. I recommend rejection for now but I would be willing to update my score based on author responses.\", \"minor_comments_to_the_authors\": \"-- Last paragraph of first page: \\\"Though successful in adversarial defensing, the underlying mechanism is still unclear.\\\"; adversarial training has a fairly principled and established underlying mechanism, robust optimization. \\n-- Figure 2 left: is the natural line PGD or SO?\\n-- The standard deviation of the noise used is very large relative to the pixel range. You might want to comment on that in the main text.\\n-- Figure 3: How was the Madry model trained? L_inf or L_2?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review\", \"review\": \"This paper consists of two parts: a 2nd-order attack method and a certification for robustness. The paper is well written and easy to follow. However, the discussion of similarities and comparisons with some previous methods could be improved.\\n\\nFirst of all, the motivation of the 2nd-order attack is clear and reasonable: for an adversarially trained model at the minimax point, the gradient is close to vanishing, so 2nd-order information helps a lot to find actual adversarial examples in this case. However,\\n\\n1. A lot of defenses have tried to modify the networks to make even computing the gradient difficult if not impossible. In this case, how effective is the 2nd-order attack? I would like to see some discussion of this.\\n\\n2. While the starting point seems like a powerful attack method, the 2nd-order information is only approximately computed via finite differences. A powerful method with a weak approximation will sound more powerful than a weak method to start with, but the actual effectiveness will need more systematic comparison. I think adding some studies of the accuracy or variance of the 2nd-order information under the proposed approximation method (in the natural setting and maybe also in the setting where the networks are modified to make even 1st-order information hard to compute) would definitely help.\\n\\n3. Also, after the approximation, as mentioned in the paper, the algorithm becomes equivalent to EOT attacks with Gaussian noise. The EOT attacks are also more general, allowing different types of noise. While it might not make much sense to compare with EOT attacks in the experiments, as the two algorithms seem to be exactly the same, it would help if more discussion could be devoted to justifying how the proposed algorithm is novel given the previously existing EOT attack.\\n\\n4. In the experiments on adversarially trained models, the adversarially trained models are trained against an l_inf attack, while the actual attack is l2. This seems unfair. Since the authors mention that it is easy to extend their method to an l_inf attack, it would be more justifiable if results with matching attack types were shown instead of the current ones.\\n\\nThe certified robustness is an interesting take, too. However, the bounds might be too strong: as far as I understand, they do not rely much on the properties of the underlying neural network f.
So in order to be applicable to all kinds of weird non-robust neural networks uniformly, the bounds cannot be too tight. To get a useful certificate level, too heavy a noise level sigma might be needed, which potentially destroys the classification accuracy of the original model f. This is acknowledged in the 'gap between theory and empirical' section. And it makes the importance of such bounds a bit weak.\\n\\n5. Also, the 'stability training' procedure derived from these bounds is quite similar to some previous methods. For example, the objective function is very similar to 'logit pairing', which adds an extra term to bound the similarity between the two logits from an adversarial or noisy version. The empirical results would be much stronger if more closely related methods were included in the comparison: for example, logit pairing, as well as simple training with Gaussian perturbation on the inputs.\\n\\nIn summary, this paper provides some interesting perspectives on adversarial attacks and certifications. However, the main algorithms are very similar to some existing methods; more discussion could be used to compare with the existing literature and clarify the novelty of the current paper. The empirical results could also be made stronger by including more relevant baseline methods and a more systematic study of the effectiveness of the approximation methods adopted.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The paper lacks clarity and needs to better contrast its work to existing results\", \"review\": \"This paper makes two different contributions in the field of adversarial training and robustness.\\nFirst the authors introduce a new type of attack that exploits second-order information while traditional attacks typically rely on first-order information.\\nAnother contribution is a theorem that, using the Renyi divergence, certifies robustness of a classifier by adding Gaussian noise to pixels.\\n\\nOverall, I find that the paper lacks clarity and does not properly contrast its work to existing results. There are also some issues with the evaluation results. I provide detailed feedback below.\\n\\n1) Prior work\\na) Connection between adversarial defense and robustness to random noise\\nThis connection is established in Fawzi, A., Moosavi-Dezfooli, S. M., & Frossard, P. (2016). Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems (pp. 1632-1640).\\nb) The connection between the minimal perturbation required to confuse a classifier and its confidence was discussed for binary classification in Section 4 of\\nFawzi, Alhussein, Omar Fawzi, and Pascal Frossard. \\\"Analysis of classifiers\\u2019 robustness to adversarial perturbations.\\\" Machine Learning 107.3 (2018): 481-508.\\nc) The idea to compute the distribution of classifier outputs when the input is convolved with Gaussian noise was already \\u201canticipated\\u201d in Section V of the following paper, which relates the minimum perturbation needed to fool a model to its misclassification rate under Gaussian-convolved input:\\nLyu, Chunchuan, Kaizhu Huang, and Hai-Ning Liang. \\\"A unified gradient regularization family for adversarial examples.\\\" Data Mining (ICDM), 2015 IEEE International Conference on.
IEEE, 2015.\\n\\nThese papers should be discussed in the paper; please elaborate on how you see your contribution relative to the results derived there.\\n\\n2) Second-order attack introduced in the paper\\nI think there are a number of important details that are ignored in the presentation.\\na) Regarding the assumption that the gradient vanishes in the difference of the loss, I think the authors should elaborate as to why this is a reasonable assumption to make. If we assume that the classifier has been trained to optimality then expanding the function at this (near-)optimum would perhaps indeed yield a gradient term of small magnitude (assuming the function is smooth). However, nothing guarantees that the magnitude of the gradient term is negligible compared to the second-order information. The boundary of the classifier could very well be in a region of low curvature.\\nb) The approximation of the second-order information is rather crude. However, the update derived is very similar to PGD with additional noise. In optimization, the use of noise is known to extract curvature; see e.g. (Xu & Yang, 2017), who showed that noisy gradient updates act as a noisy power method that extracts negative curvature directions.\\nXu, Y., & Yang, T. (2017). First-order Stochastic Algorithms for Escaping From Saddle Points in Almost Linear Time. arXiv preprint arXiv:1711.01944.\\n\\n3) Issue of \\\"degenerate global minimum\\\": The authors argue that multistep attacks also suffer from this issue. However, the PGD attack of Madry is also initialized at a random point within the uncertainty ball around x, i.e. the PGD attack first adds random noise to x before iteratively ascending the loss function. This PGD update + noise at the first iteration seems rather similar to the update derived by the authors that uses random noise at every iteration. It could therefore be that the crude approximation of second-order information is not so different from previous work. This should be further investigated either theoretically or empirically.\\n\\n4) Lack of details regarding some important aspects in the paper\\na) \\u201cNote the evaluation requires adjustment and computing confidence intervals for p(1) and p(2), but we omit the details as it is a standard statistical procedure\\u201d\\nThe authors seem to sweep this under the carpet, but this estimation procedure gives only an estimate of the required quantities p(1) and p(2), which I think would require adjusting the result in the theorem to be a high-probability bound (or an expectation bound) instead of a deterministic result.\\n\\nb) \\u201cthe noise is not necessarily added directly to the inputs but also to the first layer of a DNN. Given the Lipschitz constant of the first layer, one can still calculate an upper bound using our analysis. We omit the details here for simplicity\\u201d\\nWhat exactly changes here? How do you estimate the Lipschitz constant in practice?\\n\\n\\n5) Main theorem needs to be contrasted to previous results\\nThe main theorem uses the Renyi divergence to certify robustness of a classifier by adding Gaussian noise to pixels. There are already many results in the field of robust optimization that derive similar results, see e.g.\\nNamkoong, H., & Duchi, J. C. (2017). Variance-based regularization with convex objectives. In Advances in Neural Information Processing Systems (pp. 2971-2980).\\nGao, R., & Kleywegt, A. J. (2016). Distributionally robust stochastic optimization with Wasserstein distance.
arXiv preprint arXiv:1604.02199.\\nCan you elaborate on the difference between your bounds and these? You do mention some of them require strong assumptions such as smoothness, but this actually seems like a mild assumption (although some activation functions used in neural nets are indeed not smooth).\\n\\n6) Adversarial training overfits to the choice of norm\\nThe main theorem derived in the paper uses the l_2 norm. What can be said regarding other norms?\\n\\n7) Experiments:\\na) the authors only report accuracies for attacks whose l2-norm is smaller than a fixed constant 0.8. However, this makes the results difficult to interpret, and the authors should instead state the signal-to-noise ratio, i.e., divide the l2-norm of the perturbation by the l2-norm of the image. Otherwise, it is not clear how strong or weak such perturbations are. (In particular, the norm depends on the dimension of the image, so l2-norms of perturbations for MNIST and CIFAR10 are not comparable).\\nb) In Section 6.2, the authors state that an l_infty trained model is vulnerable against l_2 perturbations. Why not train the model under both l_infty and l_2 perturbations?\\nc) Figure 1\\nBased on the results predicted in Theorem 2, it seems it would be more interesting to evaluate the largest L for which the classifier predictions are the same. Why did you report a different result?\\n\\n8) Other comments\\nsection 2.1: “Note this distribution is different from the one generated from softmax”. Why/How is this different?\\nconnection to EOT attack: the authors claim E_{d∼N(0,σ²I)} [∇_x L(θ, x, y)|x+d] = ∇_x E_{d∼N(0,σ²I)} [∇_x L(θ, x, y)|x+d]. There is a typo on the RHS where ∇_x is repeated twice. This is also the common reparametrization trick, so the authors could cite \\nKingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
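A side note on the EOT identity quoted in point 8 of the review above: because gradient and expectation commute, the gradient of the Gaussian-smoothed loss can be estimated by simple Monte Carlo averaging. A minimal sketch under stated assumptions, where `loss_grad(x, y)` is a hypothetical callable returning ∇_x L(θ, x, y) for the model under attack:

```python
import numpy as np

def smoothed_loss_grad(loss_grad, x, y, sigma=0.25, n_samples=32, seed=0):
    """Monte Carlo estimate of grad_x E_{d ~ N(0, sigma^2 I)}[L(theta, x + d, y)].

    By the identity discussed in the review, the expectation of gradients
    equals the gradient of the expectation, so averaging per-sample
    gradients at Gaussian-perturbed inputs gives an unbiased estimate.
    """
    rng = np.random.default_rng(seed)
    grads = [loss_grad(x + sigma * rng.standard_normal(x.shape), y)
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)
```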
Syl6tjAqKX
BEHAVIOR MODULE IN NEURAL NETWORKS
[ "Andrey Sakryukin", "Yongkang Wong", "Mohan S. Kankanhalli" ]
The prefrontal cortex (PFC) is the part of the brain responsible for the behavior repertoire. Inspired by PFC functionality and connectivity, as well as by the human behavior formation process, we propose a novel modular neural network architecture with a Behavioral Module (BM) and a corresponding end-to-end training strategy. This approach allows efficient learning of behavior and preference representations. This property is particularly useful for user modeling (as for dialog agents) and recommendation tasks, as it allows learning personalized representations of different user states. In experiments with video game playing, the results show that the proposed method allows separation of the main task's objectives and behaviors between different BMs. The experiments also show network extendability through independent learning of new behavior patterns. Moreover, we demonstrate a strategy for efficient transfer of newly learned BMs to unseen tasks.
[ "Modular Networks", "Reinforcement Learning", "Task Separation", "Representation Learning", "Transfer Learning", "Adversarial Transfer" ]
https://openreview.net/pdf?id=Syl6tjAqKX
https://openreview.net/forum?id=Syl6tjAqKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Byet4LjBlE", "HyxcQgBonQ", "H1gNs4tqhX", "BygeJNU5hX" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1545086512757, 1541259298000, 1541211292364, 1541198808257 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper487/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper487/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper487/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper487/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper takes inspiration from the brain to add a behavioral module to a deep reinforcement learning architecture. Unfortunately, the paper's structure and execution lacks clarity and requires a lot more work: as noted by reviewers, the link link between motivation and experiments is too fuzzy and their execution is not convincing.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea that requires more work\"}", "{\"title\": \"Review\", \"review\": [\"# Summary\", \"This paper proposes to learn behaviors independently from the main task. The main idea is to train a behavior classifier and use domain-adversarial training idea to make the features invariant to sources of behaviors for transfer learning to new behaviors/tasks. The results on Atari games show that the proposed idea learns new behavior more quickly than the baseline approaches.\", \"[Cons]\", \"Some descriptions are ambiguous, which makes it hard to understand the core idea and goal of this paper.\", \"The experimental setup is not well-designed to show the benefit of the idea.\", \"# Comments\", \"This overall idea is a straightforward extension from domain-adversarial learning except that this paper considers transfer learning in RL.\", \"The goal/motivation of this paper is not very clearly described. It seems like there is a \\\"main task\\\" (e.g., maximizing scores in Atari games) and \\\"behavior modules\\\" (e.g., specific action sequences). It is unclear whether the goal of this paper is to learn 1) the main task, 2) learning new behavior modules quickly, or 3) learning new (main) tasks quickly. In the abstract/introduction, the paper seems to address 3), whereas the actual experimental result aims to solve 2). The term \\\"task\\\" in this paper often refers to \\\"main task\\\" or \\\"behavior\\\" interchangeably, which makes it hard to understand what the paper is trying to do.\", \"The experiment is not well-designed. If the main focus of the paper is \\\"transfer to new tasks\\\", Atari is a not a good domain because the main task is fixed. Also, behavior modules are just \\\"hand-crafted\\\" sequences of actions. Transfer learning across different behaviors are not interesting unless they are \\\"discovered\\\" in an unsupervised fashion.\", \"The paper claims that \\\"zero-shot\\\" transfer is one of the main contributions. Zero-shot learning by definition does not require any additional learning. However, they \\\"trained\\\" the network on the new behavior modules (only the main network is fixed), which is no longer \\\"zero-shot\\\" learning.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review for Paper BEHAVIOR MODULE IN NEURAL NETWORKS\", \"review\": \"The authors try to build a deep neural network model based on observations from the human brain Pre-Frontal Cortex connectivity. 
Based on a DQN network, the authors add additional fully connected layers as a Behavior Module to encode the agent's behavior, and add a Discriminator to transfer information between behavior modules. The authors experiment on four different games and evaluate based on two metrics: game scores and behavior distance.\\n\\nOverall, the quality of the paper is low and I recommend rejecting it.\\n\\n[Weaknesses in Detail]\\n1. I am not convinced that the proposed algorithm actually solves/works as described in the motivation. Moreover, the whole framework just adopts existing algorithms (like DQN and adversarial training), which provides little technical contribution.\\n\\n2. I am skeptical about the motivation - whether mimicking human brain prefrontal cortex connectivity can really result in a better neural network model. The poor execution and insufficient evaluation of this work prevent me from getting a clear answer.\\n\\n3. It is very strange that the authors emphasize that \\\"This property is particularly useful for user modeling (as for dialog agents) and recommendation tasks, as it allows learning personalized representations of different user states.\\\" while in the end doing experiments on video game playing. There are tons of public recommendation data sets out there; why not experiment on recommendation, which has much clearer (well-established) evaluation metrics and public-domain datasets that make it easier for others to repeat the experiments?\\n\\n4. The experiments are insufficient and the baselines are weak. Many state-of-the-art works are left out.\\n\\n5. The writing of this paper needs further improvement, and parts of this paper are not clearly written, which makes it challenging for readers to follow the authors' ideas.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting Paper, underwhelming experiments\", \"review\": \"This paper introduces a \\\"behavior module\\\" (BM), which is a small network that encodes preferences over actions and runs parallel to the fully connected layers of a policy. The paper shows this architecture working in Atari games, where the same policy can be used to achieve different action preferences over a game while still playing well. It also includes a thorough recap of past modular approaches.\\n\\nThe motivation for the BM is that we may want deep networks to be able to decompose \\\"strategy\\\" and \\\"behavior\\\", where behavior may influence decisions without affecting performance. In this framework, the BM is trained on a reward of correctness + personalized “satisfaction”.\\n\\nThe experiments model behavior as preferences over how many actions to play simultaneously. The trained BMs can be transferred to new tasks without finetuning. The ideas here also have some similarity to the few-shot learning literature.\\n\\nComments on the experiments:\\n1. Table 2 does not show a smooth interpolation between reward scaling and AMSR vs BD. This is surprising because the performance on the game should be highest when it is weighted the most. This indicates to me that the results are actually high variance; the 0.8 vs 0.88 in stage 2 of 0.25r vs 0.5r suggests a standard deviation of probably at least +/- 0.08. Adding standard deviations to these numbers is important for scientific interpretability.\\n2. 
I expect some BMs should perform much better than others (as they have been defined by the number of actions to play at once). I would like to see (maybe in the appendix) a table similar to Table 2 for individual BMs. I currently assume the numbers are averaged over all BMs.\\n3. Similarly, I would like to see the BD for BM0 (e.g., if a policy is not optimized for any behavior, how close does it get to the other behaviors on average). This is an important lower bound that we can compare the other BDs to. \\n4. An obvious baseline missing is to directly weight the Q values of the action outputs (instead of having an additional network) by the designed behavior rewards. There is an optimal way to do this because of the experimental choices (a minimal sketch of this baseline follows this review).\\n\\nQuestions:\\n1. For BM2, you write \\\"Up and Down (or Right and Left)\\\" - did you mean \\\"Up and Right\\\"? How can Up and Down be played at the same time?\\n\\nOverall, this paper uses neuroscience to motivate a behavior module. However, the particular application and problem settings fall short of these abstract \\\"behaviors\\\". Currently, the results are just showing that RL optimizes whatever reward function is provided, and that architectural decomposition allows for transfer, which was already shown in (Devin 2017). An experiment which would better highlight the behavior part of the BM architecture is the following:\\n1. Collect datasets of demonstrations (e.g., on Atari) from different humans.\\n2. Train a policy to accomplish the task (with RL).\\n3. Train BMs on each human to accomplish the task in the style of each human.\\nThis would show that the BMs can capture actual behavior. \\n\\nThe dialog examples discussed in the abstract would also be very exciting.\\n\\nIn conclusion, I find the idea interesting, but the experiments do not show that this architecture can do anything new. The abstract and introduction discuss applications that would be much more convincing. I hope to see experiments with a more complex definition of \\\"behavior\\\" that cannot be handcoded into the Q function.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
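The minimal sketch referenced in point 4 of the review above, under stated assumptions (not the paper's method): `q_values` is a hypothetical per-action output of a trained DQN, and `behavior_reward` stands in for the hand-designed per-action "satisfaction" bonus.

```python
import numpy as np

def behavior_weighted_action(q_values, behavior_reward, beta=1.0):
    """Baseline from the review: pick the action under the task Q-values
    plus a scaled bonus for the actions the hand-coded behavior prefers.

    q_values:        array of shape (n_actions,) from a trained DQN.
    behavior_reward: array of shape (n_actions,) with the designed
                     per-action behavior bonus.
    beta:            trade-off between task performance and behavior.
    """
    return int(np.argmax(q_values + beta * behavior_reward))

# Example: 4 actions, behavior prefers actions 1 and 2.
action = behavior_weighted_action(
    np.array([1.0, 0.9, 0.8, 0.2]),
    np.array([0.0, 1.0, 1.0, 0.0]),
    beta=0.25,
)
```

Because the behaviors in the paper are hand-coded preferences over actions, this reweighting needs no extra network at all, which is what makes it a natural baseline.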
HkxaFoC9KQ
Deep reinforcement learning with relational inductive biases
[ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart", "Murray Shanahan", "Victoria Langston", "Razvan Pascanu", "Matthew Botvinick", "Oriol Vinyals", "Peter Battaglia" ]
We introduce an approach for augmenting model-free deep reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability. Our architecture encodes an image as a set of vectors, and applies an iterative message-passing procedure to discover and reason about relevant entities and relations in a scene. In six of seven StarCraft II Learning Environment mini-games, our agent achieved state-of-the-art performance, and surpassed human grandmaster level on four. In a novel navigation and planning task, our agent's performance and learning efficiency far exceeded non-relational baselines, and it was able to generalize to more complex scenes than it had experienced during training. Moreover, when we examined its learned internal representations, they reflected important structure about the problem and the agent's intentions. The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases. Our experiments show this approach can offer advantages in efficiency, generalization, and interpretability, and can scale up to meet some of the most challenging test environments in modern artificial intelligence.
[ "relational reasoning", "reinforcement learning", "graph neural networks", "starcraft", "generalization", "inductive bias" ]
https://openreview.net/pdf?id=HkxaFoC9KQ
https://openreview.net/forum?id=HkxaFoC9KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJlfgRJlgV", "BJxkrbE5CX", "ByxqT6b9RQ", "ryemxVtKAQ", "HJeh45lmCm", "H1gQS1H367", "B1lNr-4nT7", "SJg6aYHuaQ", "BJxWAEv0nX", "Syg-myA9n7", "Bkee1fmqhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544711657901, 1543287095057, 1543278017921, 1543242731126, 1542814259677, 1542373178945, 1542369595933, 1542113733000, 1541465288622, 1541230361200, 1541186008040 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper486/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper486/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper486/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper486/Authors" ], [ "ICLR.cc/2019/Conference/Paper486/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper486/Authors" ], [ "ICLR.cc/2019/Conference/Paper486/Authors" ], [ "ICLR.cc/2019/Conference/Paper486/Authors" ], [ "ICLR.cc/2019/Conference/Paper486/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper486/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper486/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents a family of models for relational reasoning over structured representations. The experiments show good results in learning efficiency and generalization, in Box-World (grid world) and StarCraft 2 mini-games, trained through reinforcement (IMPALA/off-policy A2C).\\n\\nThe final version would benefit from more qualitative and/or quantitative details in the experimental section, as noted by all reviewers. \\n\\nThe reviewers all agreed that this is worthy of publication at ICLR 2019. E.g. \\\"The paper clearly demonstrates the utility of relational inductive biases in reinforcement learning.\\\" (R3)\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A significant study of relational inductive biases in DRL\"}", "{\"title\": \"Reply to authors' response\", \"comment\": \"Thanks for the response. Most of my concerns are addressed. I think this work is a nice contribution to the community.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"I believe the authors have addressed most of my comments and the revision has certainly improved the quality of the paper. I still think the overall contribution of the paper is very limited however I agree with the authors that it is indeed an important step towards generalizing RL approaches. In that light, I have adjusted my score and support this paper for acceptance.\"}", "{\"title\": \"paper revision\", \"comment\": \"We have now submitted a revised version of the paper addressing the criticisms and suggestions from all 3 reviewers. We have also included the results of a new set of experiments using the relational module in combination with different RL algorithms (A3C and distributed DQN), which more clearly demonstrate its general applicability. These results are mentioned in the main text and summarized in Figure 7 in Appendix.\"}", "{\"title\": \"Thanks for the response!\", \"comment\": \"Thanks for the thorough response --- I appreciate the additional clarifications being added to the text, and I completely understand that the resource-intensive nature of StarCraft makes some quantitative results difficult to obtain!\"}", "{\"title\": \"response to reviewer 3\", \"comment\": \"Thank you for your review! 
Our goal was precisely to show the utility of relational inductive biases in RL, and we are very pleased to know you found the evidence we presented compelling.\\n\\nRegarding your suggestions:\\n\\n1) Thank you for pointing this out. We agree that a mention of NerveNet is justified. We will include a sentence in the text comparing the approaches.\\n\\n2) We agree this is a relevant discussion point. As we mentioned in a separate response, using self-attention diminishes the impact of the quadratic complexity compared to other approaches -- e.g., Relation Networks (Santoro et al. 2017). This is due to the quadratic computation being reduced to a single matrix multiplication (dot product). Having said this, your point is still a valid one. We are happy to include a discussion point mentioning the scalability challenges and highlight some possible approaches to mitigate this issue.\\n\\n3) While we agree that further quantitative detail would benefit the paper, due to the resource-intensive nature of StarCraft, we were faced with a harder constraint on the number of hyperparameters and seeds that we could test in each experiment. That being said, we are now running additional tests and computing standard errors to address your points and provide more information about the performance gap between the agents.\\n\\nThank you for spotting the incorrect use of the word \\\"seeds\\\" in the caption of Figure 8. To clarify, we ran around 100 combinations of hyperparameters for each mini-game (which included 3 different seeds), as described on page 13. We then used the 10 best runs (not seeds), out of 100, to generate the plot. Regarding the drop in performance after the 10th best run, it follows a linear decay, akin to what we observe for the top 10 runs. We will update the text accordingly to make both points clear.\"}", "{\"title\": \"response to reviewer 1\", \"comment\": \"Thank you for your thorough review and suggestions; we are grateful you appreciated the work!\\n\\nTo answer your points, one by one:\\n\\n> Presentation\\n\\nThank you for the suggestion. We will add details about each of the StarCraft mini-games in the text to give a better intuition about the task requirements.\\n\\n> Evaluation\\n\\n1) Indeed, we ran experiments using the model described in Santoro et al, 2017 as the “relational component” in our agent. We observed that, while the agents were able to learn the task to a certain extent, training was extremely slow in Box-World and prohibitive in StarCraft-II. We attribute this to the application of a relatively large MLP over each pair of entities (N^2 elements). In fact, this is one of the reasons that attracted us to multi-head attention to begin with, for its ability to compute pairwise interactions very efficiently -- through a single matrix multiplication (inner product) -- and to instead apply an MLP over the resulting N entities (rather than N^2); see the sketch below.\\n\\n2) We generally agree with your comment. First, it is not obvious to what degree real-world tasks require explicit relational reasoning. Second, more conventional models, e.g. ConvNets, are capable of a form of relational reasoning, in the sense that they learn the relationships between image patches. Regarding the first point, we have recently seen an increasing number of publications using similar mechanisms to achieve SOTA in a variety of real-world tasks, e.g. visual question answering (Malinowski et al, 2018), face recognition (Xie et al, 2018), translation (Vaswani et al, 2017). 
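The sketch referenced above, for concreteness: a single-head, NumPy-only illustration of dot-product attention over N entity vectors, in which all N^2 pairwise scores come from one matrix product. This is a simplified stand-in with assumed projection matrices, not the exact multi-head architecture from the paper.

```python
import numpy as np

def self_attention(entities, w_q, w_k, w_v):
    """Single-head dot-product attention over a set of entity vectors.

    entities: (N, d) array, one row per entity (e.g., flattened CNN cells).
    w_q, w_k, w_v: (d, d_k) projection matrices, assumed given.
    All N^2 pairwise interactions live in the single product q @ k.T,
    after which any per-entity MLP only has to process N vectors.
    """
    q, k, v = entities @ w_q, entities @ w_k, entities @ w_v
    logits = q @ k.T / np.sqrt(k.shape[-1])          # (N, N) pairwise scores
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over entities
    return weights @ v                               # (N, d_k) mixed entities
```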
These publications suggest that more explicit ways of comparing and relating different entities indeed help solve real-world tasks. Regarding the second point, our view is that a capacity to learn relations in a non-local manner (as expressed by Wang et al, 2017) -- i.e., irrespective of how proximal the entities being related are -- will be critical to achieve a satisfying level of generalization in our RL agents. Our results support this hypothesis, but we acknowledge that more work is needed on real-world applications to further establish this idea.\\n\\n> Novelty\\n\\nWe agree with you that the focus is not on the novelty of these components themselves, but instead on the combination of these for RL, together with careful analyses and evaluation. The sentence you mention might be misleading in that regard, and so we propose to change it in the revised version of the paper.\\n\\n> Length of distractor branches\\n\\nYes, the length of the distractor branches still matters. In order for an agent not to take the wrong branch (with perfect confidence), it needs to know the consequences of opening the whole sequence of boxes along that branch before opening the first box in that branch. In that respect, it is irrelevant that the level terminates after the first wrong decision, except for the fact that it reduces the amount of time spent on a level that cannot be solved anymore.\\n\\n> Missing references\\n\\nThank you for the references. These are indeed related to our work and deserve to be mentioned. We will include them.\"}", "{\"title\": \"response to reviewer 2\", \"comment\": \"Thank you for your review! We appreciate your suggestions to improve the submission.\\n\\nTo answer each of your points:\\n\\n1) We agree that a hard comparison between our approach and model-based planning cannot be made. Our intent was to raise this as a point of discussion rather than to make a strong claim about their parallels. We are happy to revise the text where this is mentioned, toning down the comparison to avoid confusion.\\n\\n2) We tried to be careful throughout the paper not to suggest that the novelty of this work lies in these two components: pairwise interactions and self-attention. Instead, and as mentioned by Reviewer 1, we argue that the combination of learnable representations of entities and self-attention in an RL setting is a significant innovation that has not been attempted before. This was a non-trivial effort, especially when applied to complex RL tasks such as StarCraft-II. Perhaps most importantly, however, it was not clear before that pairwise interactions themselves could allow for improved generalization.\\n\\nWe believe this work is a small but important step that moves us towards addressing some of the criticism deep RL has received (namely, an inability to flexibly generalize 'out-of-distribution') by focusing on entity- and relation-centric representations, as used in more symbolic approaches.\\n\\n3) Thank you for the suggestion. We agree that showing that the results extend to other model-free algorithms would make the paper stronger. We tested an asynchronous advantage actor-critic (A3C) agent early on and the results were similar, but we will re-run these experiments now, alongside an off-policy value-based RL algorithm (DQN), to get exact numbers.\\n\\n4) We appreciate your concerns here. We would like to clarify that the Box-World levels do indeed have the features that you propose. 
Every level is randomly generated in almost every aspect, ensuring that: (1) the box containing the gem changes in every level; (2) the colors of the boxes are randomly shuffled in every level; (3) the spatial position of each box is randomly chosen in every level. This random generation of levels makes the problem very hard. In fact, the number of possible combinations is so large that the agents we trained on this task never encounter the same level twice. An agent that solves the training levels to 100%, like the relational agent that we proposed, is capable of solving previously unseen levels without making a single mistake.\\n\\n5) We found that it was useful to include a shared non-linear transformation over the elements that result from the attention mechanism, which itself only comprises a weighted sum of elements produced by a single linear transformation. Informally speaking, while the attention produces mixtures of entities, the extra non-linearity (the g_theta MLP) gives the model the capacity to compute more complex relationships between the entities. This is analogous to what is done in Relation Networks, by Santoro et al. 2017, described as having the role of “infer[ring] the ways in which two objects are related”. We are happy to include a sentence in the text to provide this intuition.\"}", "{\"title\": \"Relational Inductive Bias for Deep Reinforcement Learning\", \"review\": \"The goal of this paper is to enhance model-free deep reinforcement learning techniques with relational knowledge about the environment, such that the agents can learn interpretable state representations which subsequently improve the sample complexity and generalization ability of the approach. The relational knowledge works as an inductive bias for the reinforcement learning algorithm and provides the agents with a better understanding of complex environments.\\nTo achieve this, the authors focus on a distributed advantage actor-critic algorithm and propose a shared relational network architecture for parameterizing the actor and critic networks. The relational network contains a self-attention mechanism inspired by recent work in that area. Using these new modules, the authors conduct evaluation experiments in two different environments - the synthetic Box-World and real-world StarCraft-II minigames - where they analyze the performance against non-relational counterparts, visualize the attention weights for interpretability, and test on out-of-training tasks for generalizability.\\n\\nOverall, the paper is well written and provides a good explanation of the proposed method. The experimental evaluation adequately demonstrates superior performance in terms of task solvability (strong result) and generalizability (to some extent). The idea of introducing relational knowledge into deep reinforcement learning algorithms is novel and timely, considering the usefulness of relational representations. However, there are several shortcomings that make this paper weak:\\n\\n1.) While it is true that relational representations help achieve a more generalizable approach and add some interpretability to the learning mechanism, comparing it to model-based approaches seems a stretch. While the authors themselves present this speculatively in the conclusion, they do mention it in the abstract and try to relate to model-based approaches. \\n2.) The relational representation network using pairwise interactions itself is not novel and has been studied extensively. Similarly, the self-attention mechanism used in this paper is already available. \\n3.
) Further, the authors chose a specific A2C algorithm to add their relational module to. But how about other model-free algorithms? Is this network generalizable to any such algorithm? If yes, will they see a similar boost in performance? A comparison/study on using this as a general module for various model-free algorithms would make this work stronger.\\n4.) I have some concerns about the generalizability claims for the Box-World tasks. Currently, the tasks shown either use levels that require a longer path of boxes than observed or use a key-lock combination never seen before. But this appears to be a very limited setting. What happens if one just changes the box with the gem between train and test? What happens if the colors of the boxes are permuted while keeping the boxes as they are? I believe the inputs are parts of the scene, so how do changes in the configuration of the scene affect the model's performance?\\n5.) What is the role of the extra MLP g_theta after obtaining A?\\n\\nOverall, it is very important that the authors present some more analysis of using the relational module across different algorithms, or explain its limitations. Further, it is not clear what the contributions of the paper are, other than parameterizing the actor-critic networks with an already known relational and attention module.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting analysis and evaluation of self-attention + relation network in RL; question about novelty\", \"review\": \"This work presents a quantitative and qualitative analysis and evaluation of the self-attention (Vaswani et al., 2017) mechanism combined with a relation network (Santoro et al., 2017) in the context of model-free RL. Specifically, they evaluated the proposed relational agent and a control agent on two sets of tasks. The first one, “Box-World”, is a synthetic environment which requires the agent to sequentially find and use a set of keys in a simple “pixel world”. This simplifies the perceptual aspect and focuses on relational reasoning. The second one is a suite of StarCraft mini-games. The proposed relational agent significantly outperforms the control agent on the “Box-World” tasks and also showed better generalization to unseen tasks. Qualitative analysis of the attention showed some signs of relational reasoning. The result on StarCraft is less significant except for one task, “Defeat Zerglings and Banelings\\\". The analysis and evaluation are solid and interesting.\\n\\nPresentation: The paper is well written and easy to follow. The main ideas and experiment details are presented clearly (some details in the appendix). \\n\\nOne suggestion is that it would help if there could be some quantitative characteristics for each StarCraft task to help readers understand the amount of relational reasoning required, for example, the total number of objects in the scene, the number of static and moving objects in the scene, etc.\\n\\nEvaluation: The evaluation is solid and the qualitative analysis on the “Box-World” tasks is insightful. Two specific comments below:\\n\\n1. The idea is only compared against a non-relational \\\"control agent”. It would be interesting to compare with other forms of relation networks, for example, the ones used in (Santoro et al, 2017). 
This could help evaluate the effectiveness of self-attention for capturing interactions. \\n\\n2. The difference between the relational and control agents is quite significant on the synthetic task but less so on the StarCraft tasks, which poses the question of what kinds of real-world tasks require relational reasoning, and what type of relational reasoning is already captured by a simple non-relational agent.\\n\\nQuestion about novelty: This paper claims it presents “a new approach for representing and reasoning…”. However, the ideas of transforming feature maps into “entity vectors” and the self-attention mechanism were already introduced, and the proposed approach is more like a combination of both. That being said, the analysis and evaluation of these ideas in RL are new and interesting.\\n\\nOne minor question: since a level will terminate immediately if a distractor box is opened, does the length of the distractor branches still matter?\\n\\nDespite the question about novelty, I think the analysis in the paper is solid and interesting, so I support the acceptance of this paper.\\n\\nMissing references: In the conclusion section, several related approaches for complex reasoning are discussed. It might also be worth exploring the branch of work (Reed & Freitas, 2015; Neelakantan et al, 2015; Liang et al, 2016) that learns to perform multi-step reasoning by generating compositional programs over structured data like tables and knowledge graphs. \\n\\nReed, Scott, and Nando De Freitas. \\\"Neural programmer-interpreters.\\\" arXiv preprint arXiv:1511.06279 (2015).\\nNeelakantan, Arvind, Quoc V. Le, and Ilya Sutskever. \\\"Neural programmer: Inducing latent programs with gradient descent.\\\" arXiv preprint arXiv:1511.04834 (2015).\\nLiang, C., Berant, J., Le, Q., Forbus, K. D., & Lao, N. (2016). Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. arXiv preprint arXiv:1611.00020.\\n\\nTypo:\\nPage 1: \\\"using using sets...\\\"\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A compelling contribution that could benefit from some more quantitative detail\", \"review\": \"The authors present a deep reinforcement learning approach that uses a “self-attention”/“transformer”-style model to incorporate a strong relational inductive bias. Experiments are performed on a synthetic “BoxWorld” environment, which is specifically designed (in a compelling way) to emphasize the need for relational reasoning. The experiments on the BoxWorld environment clearly demonstrate the improvement gained by incorporating a relational inductive bias, including compelling results on generalization. Further experimental results are provided on the StarCraft minigames domain. While the results on StarCraft are more equivocal regarding the importance of the relational module, the authors do set a new state of the art, and the results are suggestive of the potential utility of relational inductive biases in more general RL settings.\\n\\nOverall, this is a well-written and compelling paper. The model is well described, the BoxWorld results are compelling, and the performance on the StarCraft domain is also quite strong. 
The paper clearly demonstrates the utility of relational inductive biases in reinforcement learning.\\n\\nIn terms of areas for potential improvement:\\n\\n1) With regards to framing, a naive reader would probably get the impression that this is the first-ever work to consider a relational inductive bias in deep RL, which is not the case, as the NerveNet paper (Wang et al., 2018) also considers using a graph neural network for deep RL. There are clear differences between this work and NerveNet; most prominently, NerveNet only uses a relational inductive bias for the policy network by assuming that a graph-structured representation is known a priori for the agent. Nonetheless, NerveNet does also incorporate a relational inductive bias for deep RL and shows how this can lead to better generalization. Thus, this paper would be improved by properly positioning itself w.r.t. NerveNet and highlighting how it is different. \\n\\n2) As with other work using non-local neural networks (or fully-connected GNNs), there is the potential issue of scalability due to the need to consider all input pairs. A discussion of this issue would be very useful, as it is not clear how this approach could scale to domains with very large input spaces. \\n\\n3) Some details on the StarCraft experiments could be made more rigorous and quantitative. In particular, the following instances could benefit from more experimental details and/or clarifications:\\n\\nFigure 6: The performance of the control model and the relational model seems very close. Any quantitative insight into this performance gap would improve the paper. For instance, is the gap between these two models significantly larger than the average gap between runs over two different random seeds? It would greatly strengthen the paper to clarify that quantitative aspect.\\n\\nPage 8: “while the former adopted a \\\"land sweep strategy\\\", controlling many units as a group to cover the space, the latter managed to independently control several units simultaneously, suggesting a finer grained understanding of the game dynamics.” This is a great insight, and the paper would be greatly strengthened by some quantitative evidence to back it up (if possible). For instance, you could compute the average percentage of agents that are doing the same action at any point in time or within some distance from each other, etc. (a minimal sketch of such a statistic follows at the end of this review). Adding these kinds of quantitative statistics to back up these qualitative insights would both strengthen the argument and make it more explicit how you are coming to these qualitative judgements.\\n\\nFigure 8 caption: “Colored bars indicate mean score of the ten best seeds” — how bad is the drop to the n-10 non-best seeds? And how many seeds were used in total?\\n\\nPage 13: “following Table 4 hyperparameter settings and 3 seeds” — if three seeds are used in these experiments, how are 10+?? seeds used for the generalization experiments? The main text implies that the same models for “Collect Mineral Shards” were re-used, but it appears that many more models with different seeds were trained specifically for the generalization experiment. This should be clarified. Alternatively, it is possible that “seeds” refers to both random seeds and hyperparameter combinations, and it would improve the paper to clarify this. 
It is possible that I missed something here, but I think it highlights the need for further clarification.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
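The sketch referenced under "Page 8" in the review above is my own illustration, under stated assumptions: `actions` is a hypothetical (T, n_units) integer array recording the action each unit took at each timestep.

```python
import numpy as np

def action_agreement(actions):
    """Fraction of units taking the modal action, averaged over timesteps.

    actions: (T, n_units) integer array of per-unit actions. A value near 1
    suggests group-style control (the "land sweep strategy"); lower values
    suggest units are being controlled independently.
    """
    agreements = []
    for step in actions:
        _, counts = np.unique(step, return_counts=True)
        agreements.append(counts.max() / step.size)
    return float(np.mean(agreements))
```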
S1E3Ko09F7
L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
[ "Jianbo Chen", "Le Song", "Martin J. Wainwright", "Michael I. Jordan" ]
Instancewise feature scoring is a method for model interpretation, which yields, for each test instance, a vector of importance scores associated with features. Methods based on the Shapley score have been proposed as a fair way of computing feature attributions, but incur an exponential complexity in the number of features. This combinatorial explosion arises from the definition of Shapley value and prevents these methods from being scalable to large data sets and complex models. We focus on settings in which the data have a graph structure, and the contribution of features to the target variable is well-approximated by a graph-structured factorization. In such settings, we develop two algorithms with linear complexity for instancewise feature importance scoring on black-box models. We establish the relationship of our methods to the Shapley value and a closely related concept known as the Myerson value from cooperative game theory. We demonstrate on both language and image data that our algorithms compare favorably with other methods using both quantitative metrics and human evaluation.
[ "Model Interpretation", "Feature Selection" ]
https://openreview.net/pdf?id=S1E3Ko09F7
https://openreview.net/forum?id=S1E3Ko09F7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryxR2SRxgE", "B1xadhicT7", "rkeAS2s5pm", "SyloPoi96m", "Hyep4ji56Q", "HygvWsscTX", "SyxgP3dn2Q", "SkeChVXt27", "HyenmOMRoX", "rylA6HRv57", "BkgYeky15Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544770997605, 1542270068985, 1542270022187, 1542269795341, 1542269748811, 1542269694706, 1541340248314, 1541121205651, 1540397092253, 1538938310162, 1538350833202 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper484/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper484/Authors" ], [ "ICLR.cc/2019/Conference/Paper484/Authors" ], [ "ICLR.cc/2019/Conference/Paper484/Authors" ], [ "ICLR.cc/2019/Conference/Paper484/Authors" ], [ "ICLR.cc/2019/Conference/Paper484/Authors" ], [ "ICLR.cc/2019/Conference/Paper484/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper484/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper484/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper484/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents two new methods for model-agnostic interpretation of instance-wise feature importance.\", \"pros\": \"Unlike previous approaches based on the Shapley value, which had an exponential complexity in the number of features, the proposed methods have a linear-complexity when the data have a graph structure, which allows approximation based on graph-structured factorization. The proposed methods present solid technical novelty to study the important challenge of instance-wise, model-agnostic, linear-complexity interpretation of features.\", \"cons\": \"All reviewers wanted to see more extensive experimental results. Authors responded with most experiments requested. One issue raised by R3 was the need for comparing the proposed model-agnostic methods to existing model-specific methods. The proposed linear-complexity algorithm relies on the markov assumption, which some reviewers commented to be a potentially invalid assumption to make, but this does not seem to be a deal breaker since it is a relatively common assumption to make when deriving a polynomial-complexity approximation algorithm. Overall, the rebuttal addressed the reviewers' concerns well enough, leading to increased scores.\", \"verdict\": \"Accept. Solid technical novelty with convincing empirical results.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Solid technical novelty with convincing empirical results.\"}", "{\"title\": \"Response to Reviewer 3 (Details)\", \"comment\": \"1. \\u201cCoefficients in Eq. (6)\\u201d\\n\\nThe coefficients are derived from Myerson value, which can be interpreted as the Shapley value for the coalition game with a graph structure. The details can be found in the proof of Theorem 2. In particular, Equation (22) in the Appendix provides the concrete procedure of derivation.\\n\\n2. \\\"The Markovian assumption is rather strict.\\\" \\n\\nWe thank the reviewer for addressing this point. We agree with the reviewer that Markovian assumption introduces bias in explanation, which aims for a better bias-variance trade-off when approximating Shapley values on structured data. Theorem 1 and Theorem 2 quantify the introduced bias under the setting when the Markovian assumption is approximately true. 
We also show on real data that such an approximation achieves a better bias-variance trade-off empirically when the number of model evaluations is linear in the number of features. \\n\\n3. \\\"Use other graph structures like parse trees on language.\\\" \\n\\nThe reviewer has made a very insightful proposal. As the current paper focuses on the generic setting of data with graph structure, we only use the simplest possible model on language to demonstrate the validity of the proposed algorithms. But the proposed idea can be a promising future direction. The authors have been thinking along the same direction for a while. One question one could ask is whether there exists a better solution concept in coalitional game theory under the setting of a parse tree. Related literature includes [1] and [2], if the reviewer is interested in thinking about this further.\\n\\n4. \\\"Y in Eqs. (8) and (9).\\\" \\n\\nWe assume the model has the form P_m(Y|X). Y is the response variable from the model.\\n\\n5. “The authors postulate that sampling-based methods are susceptible to high variance. Show this empirically.”\\n\\nWe have added an experiment in the updated version addressing the statistical dispersion of estimates of the Shapley value produced by sampling-based methods. Two commonly used nonparametric metrics are introduced to measure the statistical dispersion between different runs of a common sampling-based method, as the number of model evaluations is varied. The figure in the link below shows the variability of SampleShapley and KernelSHAP as a function of the number of model evaluations:\\n\\nhttps://drive.google.com/file/d/1f5yBIwxd85tyxQKB5gBlRtBX4pRe0noL/view?usp=sharing\\n\\n8. \\\"Not use superpixels as features.\\\"\\n\\nWe agree with the reviewer that using superpixels may lead to better visualization results. However, this leads to a performance decay in terms of the change in the log-odds ratio when a fixed number of pixels are masked. The same issue has been addressed in [3]. For fairness of comparison, we use the raw pixels as features for all methods.\\n\\n[1] Winter, Eyal. \\\"A value for cooperative games with levels structure of cooperation.\\\" International Journal of Game Theory 18.2 (1989): 227-240.\\n[2] Faigle, Ulrich, and Walter Kern. \\\"The Shapley value for cooperative games under precedence constraints.\\\" International Journal of Game Theory 21.3 (1992): 249-266.\\n[3] Lundberg, Scott M., and Su-In Lee. \\\"A unified approach to interpreting model predictions.\\\" Advances in Neural Information Processing Systems. 2017.\"}", "{\"title\": \"Response to Reviewer 3 (Summary)\", \"comment\": \"We thank the reviewer for the detailed comments and the encouraging title! We have included three experiments in the updated version to address Points 5, 6, and 7 of the reviewer's comments, and have also omitted unnecessary details from the original paper. We respond to the reviewer's comments concretely below.\\n\\n“The paper is generally well written, somewhat lengthy and at times repetitive (I would also swap 2.1 and 2.2 for better early motivation)”\\nBased on the reviewer's request, we have shortened the paper by deleting unnecessary repetitions and details in Section 4.3 and the experiment section, and moving some of them to the appendix. For example, the description of datasets is deferred to the appendix. As a replacement, we have included a new experiment with human evaluation. On the other hand, we still keep the order of 2.1 and 2.2. 
The main reason is that it seems more natural to explain first how the importance of a feature subset is quantified (Section 2.1) before we motivate the Shapley value, which incorporates interactions based on this quantification (Section 2.2).\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the detailed suggestions and encouraging comments! We have included an experiment with human evaluation in the updated version. Below we respond to Reviewer 1's questions in detail.\\n\\n“Is there a way to compare against KernelSHAP using the same (human) evaluation methods from the original paper?”\\n\\nWe agree with the reviewer that human evaluation is important in this area, and we have added a new experiment with human evaluation in the updated version. \\n\\nIn the KernelSHAP paper, the authors designed experiments to argue for the use of the Shapley value instead of LIME, which show that the Shapley value is more consistent with human intuition on a data set with only a small number of features. Both KernelSHAP and our algorithms are ways of approximating the Shapley value when there is a large number of features, in which case the exact same experiment is difficult to replicate. \\n\\nWe have designed two experiments involving human evaluation for our methods and KernelSHAP on IMDB in the updated version. We assume that the key words contain an attitude toward a movie and can be used to infer the sentiment of a review. In the first experiment, we ask humans to infer the sentiment of a review within a range of -2 to 2, given the key words selected by different model interpretation approaches. Second, we also ask humans to infer the sentiment of a review with the top words masked, where words are masked until the predicted class gets a probability score of 0.1. In both experiments, we evaluate the consistency with the truth, the agreement between humans on a single review via the standard deviation, and the confidence of their decisions via the absolute value of the score. We observe that L-Shapley and C-Shapley take the lead in the two experiments, respectively. See the table and an example interface in the links below, and also Section 5.3 for more details:\\n\\nhttps://drive.google.com/file/d/1_HOR28DGlKqEQVplGahv47o2xPe5lT5e/view?usp=sharing\\n\\n“It's a little ambiguous to me whether you tried to complement other sampling/regression-based methods in your experiments or not. Can you please clarify?”\\n\\nIn the experiments, we didn't combine our approach with sampling-based methods, as the number of model evaluations is already small enough in our setting (linear in the number of features).\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the detailed and encouraging comments! Based on the suggestions from the reviewer, we have included an experiment in the updated version that measures the correlation between L-Shapley, C-Shapley and the Shapley value.\\n\\n“Understanding of the evaluation metric”:\\nThe evaluation metric we use is the following: log(P(y_pred | x)) - log(P(y_pred | x_{top features MASKED})). 
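In code, this metric corresponds to the following minimal sketch (for illustration only; `model_prob` is a hypothetical black box returning the vector of class probabilities for an input, and masking replaces the chosen features with a reference value):

```python
import numpy as np

def log_odds_drop(model_prob, x, top_features, mask_value=0.0):
    """log P(y_pred | x) - log P(y_pred | x with top features masked)."""
    probs = model_prob(x)
    y_pred = int(np.argmax(probs))      # explain the model's own decision
    x_masked = x.copy()
    x_masked[list(top_features)] = mask_value
    return np.log(probs[y_pred]) - np.log(model_prob(x_masked)[y_pred])
```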
\\n\\n\\u201cI wonder is there some way to attack the problem of distinguishing when a feature is ranked highly when its (exact) Shapley value is high versus when it is ranked highly as an artifact of the estimator?\\u201d\\n\\nWe have added a new experiment in the updated version to address the problem of how the rank of features correlates with the rank produced by the true Shapley value. We sample a subset of test data from Yahoo! Answers with 9-12 words, so that the underlying Shapley scores can be accurately computed. We employ two common metrics, Kendall's Tau and Spearman's Rho to measure the similarity (correlation) between two ranks. We have observed a high rank correlation between our algorithms and the Shapley value. See the figure in the link below, and also Appendix C for more details:\", \"https\": \"//drive.google.com/open?id=1oWsWyA4IkDIbaOjwOOwMAYJzu6kUuQSa\"}", "{\"title\": \"UPDATED: Four New Experiments Based on the Suggestions of the Reviewers\", \"comment\": \"We have added four experiments in the updated version of the paper based on the suggestions of three reviewers. The first experiment compares human evaluation on top words selected by our algorithms and KernelSHAP, and also compares human evaluation on masked reviews. The second experiment evaluates how the rank produced by our algorithms correlates with the rank of the Shapley value. The third experiment evaluates the sensitivity of our algorithms to the size of neighborhood. The last experiment empirically evaluates the statistical dispersion of sampling-based algorithms. The first experiment has been added to Section 5.3 in the main paper while the rest are added to Appendix C,D and E.\\n\\nThere are also some other minor changes addressing the concern of Reviewer 3 in the length of the paper. We have deferred the detailed description of data sets and models into the appendix. We have shortened Section 4.3 which describes the connection with related work. We also reduced the number of text examples for visualization in the appendix.\\n\\nWe again express our sincere thanks to all the reviewers, who have helped build our manuscript into a better and more complete shape!\"}", "{\"title\": \"A new method for computing Shapely values\", \"review\": \"This paper proposes two methods for instance-wise feature importance scoring, which is the task of ranking the importance of each feature in a particular example (in contrast to class-wise or overall feature importance). The approach uses Shapely values, which are a principled way of measuring the contribution of a feature, and have been previously used in feature importance ranking.\\n\\nThe difficulty with Shapely values is they are extremely (exponentially) expensive to compute, and the contribution of this paper is to provide two efficient methods of computing approximate Shapely values when there is a known structure (a graph) relating the features to each other.\\n\\nThe paper first introduces the L(ocal)-Shapely value, which arises by restricting the Shapely value to a neighbourhood of the feature of interest. The L-Shapely value is still expensive to compute for large neighbourhoods, but can be tractable for small neighbourhoods.\\n\\nThe second approximation is the C(onnected)-Shapely value, which further restricts the L-Shapely computation to only consider connected subgraphs of local neighbourhoods. 
The justification for restricting to connected neighbourhoods is given through a connection to the Myerson value, which is somewhat obscure to me, since I am not familiar with the relevant literature. Nonetheless, it is clear that for the graphs of interest in this paper (chains and lattices) restricting to connected neighbourhoods is a substantial savings.\\n\\nI have understood the scores presented in Figures 2 and 3 as follows:\\nFor each feature of each example, rank the features according to importance, using the plugin estimate for P(Y|X_S) where needed.\\nFor each \\\"percent of features masked\\\", compute log(P(y_true | x_{S \\ top features})) - log(P(y_true | x)) using the plugin estimate, and average these values over the dataset.\\n\\nBased on this understanding the results are quite good. The approximate Shapley values do a much better job than their competitors of identifying highly relevant features based on this measure. The qualitative results are also quite compelling, especially on images, where C-Shapley tends to select contiguous regions, which is intuitively correct behavior.\\n\\nComparing the different methods in Figure 4, there is quite some variability in the features selected by using different estimators of Shapley values. I wonder is there some way to attack the problem of distinguishing when a feature is ranked highly because its (exact) Shapley value is high versus when it is ranked highly as an artifact of the estimator?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Novel methods for Shapley value estimation seem theoretically sound, could benefit from slightly more extensive evaluation\", \"review\": \"This paper provides new methods for estimating Shapley values for feature importance that include notions of locality and connectedness. The methods proposed here could be very useful for model explainability purposes, specifically in the model-agnostic case. The results seem promising, and it seems like a reasonable and theoretically sound methodology. In addition to the theoretical properties of the proposed algorithms, they do show a few quantitative and qualitative improvements over other black-box methods. They might strengthen their paper with a more thorough quantitative evaluation.\\n\\nI think the KernelSHAP paper you compare against (Lundberg & Lee 2017) does more quantitative evaluation than what's presented here, including human judgement comparisons. Is there a way to compare against KernelSHAP using the same evaluation methods from the original paper?\\n\\nAlso, you mention throughout the paper that the L-Shapley and C-Shapley methods can easily complement other sampling/regression-based methods. It's a little ambiguous to me whether this was actually something you tried in your experiments or not. Can you please clarify?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Nice addition to Shapley literature, but could be strengthened\", \"review\": \"The paper proposes two approximations to the Shapley value used for generating feature scores for interpretability. Both exploit a graph structure over the features by considering only subsets of neighborhoods of features (rather than all subsets). 
The authors give some approximation guarantees under certain Markovian assumptions on the graph. The paper concludes with experiments on text and images.\\n\\nThe paper is generally well written, albeit somewhat lengthy and at times repetitive (I would also swap 2.1 and 2.2 for better early motivation). The problem is important, and exploiting graphical structure is only natural. The authors might benefit from relating to other fields where similar problems are solved (e.g., inference in graphical models). The approximation guarantees are nice, but the assumptions may be too strict. The experimental evaluation seems valid but could be easily strengthened (see comments).\\n\\nComments:\\n1. The coefficients in Eq. (6) could be better explained.\\n\\n2. The theorems seem sound, but the Markovian assumption is rather strict, as it requires that a feature i has an S that \\\"separates\\\" over *all* x (in expectation). This goes against the original motivation that different examples are likely to have different explanations. When would this hold in practice?\\n\\n3. While considering chains for text is valid, the authors should consider exploring other graph structures (e.g., parsing trees).\\n\\n4. For Eqs. (8) and (9), I could not find the definition of Y. Is this also a random variable representing examples?\\n\\n5. The authors postulate that sampling-based methods are susceptible to high variance. Showing this empirically would have strengthened their claim.\\n\\n6. Can the authors empirically quantify Eqs. (8) and (9)? This might shed light as to how realistic the assumptions are.\\n\\n7. In the experiments, it would have been nice to see how performance and runtime vary with increased neighborhood sizes. This would have quantified the importance of neighborhood size and robustness to hyper-parameters.\\n\\n8. For the image experiments, since C-Shapley considers connected subsets, it is perhaps not surprising that Fig. 4 shows clusters for this method (and not others). Why did the authors not use superpixels as features? This would have also let them compare to LIME and L-Shapley.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response: Baselines not weak; Model-specific comparison not necessary in paper, but available in the reply\", \"comment\": \"We first thank the reader for the careful reading and greatly appreciate his/her time in writing such a detailed review :)\\n\\nIn summary, the reader proposes two suggestions: \\n1. The current baselines, including KernelSHAP and LIME, are weak compared to methods like 'leave-one-out'. \\n2. The authors should compare with model-specific techniques, including \\u2018integrated gradients\\u2019.\\n\\nThe short reply is:\\n1. Leave-one-out is not as strong as KernelSHAP, both theoretically and experimentally.\\n2. We do not compare with model-specific approaches in the paper as we focus on model-agnostic interpretation. See the anonymous link at the end for a comparison made specifically for the reader.\\n\\nBelow are the concrete details:\\n\\nWe have different opinions on the first point (to a certain extent). In particular, KernelSHAP is stronger than 'leave-one-out':\\na.
Based on the source code of KernelSHAP (https://github.com/slundberg/shap/blob/master/shap/explainers/kernel.py), KernelSHAP considers 'masking each word' when computing importance scores, as long as the number of samples is super-linear in the number of features. \\nb. The Shapley value further incorporates the interaction between features when the number of samples is larger than d (the number of features), which is not the case for leave-one-out.\\nc. Experimentally, leave-one-out is not as good as KernelSHAP when more than one feature is masked, in terms of the decay in log-likelihood.\\n\\nSecondly, the focus of this work is on model-agnostic interpretation, and thus we did not include a comparison with model-specific methods in the paper. Model-specific methods can have superior performance in some cases while suffering a performance decay in other cases: for example, Integrated Gradients can have performance comparable to L-Shapley on CNNs, but does not perform as well as other methods on LSTMs at comparable complexity. Comparing our methods against all model-specific methods for various models would be an unnecessary use of time and would also distract readers from the focus of the paper: efficient approximations of the Shapley value, as a model-agnostic method for model interpretation. Being MODEL-AGNOSTIC can be important in some practical settings where models are not specified or multiple models are used. \\n\\nNevertheless, it does no harm to compare one or two model-specific methods in the reply as suggested by the reader. The reader proposes to compare our methods with Gradient X Input, DeepLIFT and Integrated Gradients. Given the inferior performance of Gradient X Input and the complexity of implementing DeepLIFT, we only compare with Integrated Gradients on NLP tasks, where the time complexity of integrated gradients is controlled to be (approximately) the same as L-Shapley for each sample: https://drive.google.com/file/d/1UYp2lKDXt-ORgL5vKsU35K5SMa-GQSrs/view?usp=sharing\"}", "{\"comment\": \"I just wanted to emphasize that the baselines used in this paper are very weak. To the best of my knowledge, no one has claimed that any of the provided baselines (LIME, KernelSHAP, or SampleSHAP) are remotely close to SOTA for, or even capable of, interpreting neural networks in the manner demonstrated here, as the original papers focused on simpler models, such as SVM, or image models with superpixel preprocessing.\\n\\nThe authors (partially) address this in the results section: \\n\\n\\\"We emphasize that our focus is model-agnostic interpretation, and we omit the comparison with interpretation methods requiring additional assumptions or specific to a certain class models, like Integrated Gradients (Sundararajan et al., 2017), DeepLIFT (Shrikumar et al., 2017), LRP (Bach et al., 2015) and LSTM-specific methods (Karpathy et al., 2015; Strobelt et al., 2018; Murdoch & Szlam, 2017).\\\"\\n\\nEven if limited to model-agnostic interpretation, a very simple, strong baseline is leave one out - black out a variable and see how much the prediction changes - which is well established in both NLP (https://arxiv.org/pdf/1612.08220.pdf) and vision (https://arxiv.org/abs/1311.2901). This method would perform significantly better than the provided baselines (the baseline examples in the bottom two rows of Figure 4 are the worst I've seen in any paper).\\n\\nI'd also argue that gradient-based methods should be compared against, such as gradient times input or integrated gradients.
While not truly model-agnostic, they only require the model to be differentiable, thus apply to all neural nets, and all models considered in this paper.\\n\\nMoreover, even if not directly comparable, I'd argue that at least some model-specific techniques should be included as well, in order to see how much is lost by moving from a custom method to a model-agnostic one.\", \"title\": \"Very weak baselines\"}" ] }
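For reference, the leave-one-out baseline the comment above describes is only a few lines; predict_prob and occlude are stand-ins for the model and the occlusion scheme (blacking out a pixel region, deleting a word, etc.):

```python
# Leave-one-out attribution as described in the comment: occlude one feature
# at a time and score it by the drop in the predicted probability of class y.
def leave_one_out(x, y, n_features, predict_prob, occlude):
    base = predict_prob(x, y)
    return [base - predict_prob(occlude(x, i), y) for i in range(n_features)]
```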
HyMnYiR9Y7
Domain Adaptation via Distribution and Representation Matching: A Case Study on Training Data Selection via Reinforcement Learning
[ "Miaofeng Liu", "Yan Song", "Hongbin Zou", "Tong Zhang" ]
Supervised models suffer from domain shift, where distribution mismatch across domains greatly affects model performance. In particular, the noise scattered in each domain plays a crucial role in representing such distributions, especially in various natural language processing (NLP) tasks. To address this issue, training data selection (TDS) has been proven to be a promising way to train supervised models with higher performance and efficiency. Following the TDS methodology, in this paper we propose a general data selection framework that performs representation learning and distribution matching simultaneously for domain adaptation on neural models. In doing so, we formulate TDS as a novel selection process based on a distribution learned from the input data, which is produced by a trainable selection distribution generator (SDG) that is optimized by reinforcement learning (RL). The model trained on the selected data then not only predicts the target domain data in a specific task, but also provides input for the value function of the RL. Experiments are conducted on three typical NLP tasks, namely part-of-speech tagging, dependency parsing, and sentiment analysis. Results demonstrate the validity and effectiveness of our approach.
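As a rough illustration of the selection loop the abstract describes (not the authors' implementation; every name below, including the reward function, is a placeholder), one RL update could look like:

```python
# Hedged sketch of one TDS update: the selection distribution generator (SDG)
# scores source examples, a subset is sampled, the task model is trained on
# it, and the SDG is updated with a REINFORCE-style gradient using a reward
# computed against the target domain (e.g. dev accuracy or a distribution
# match score). The real method in the paper is more involved.
import torch

def tds_update(sdg, task_model, source_feats, source_data, reward_fn, optimizer, n_select=64):
    logits = sdg(source_feats)                              # one score per source example
    probs = torch.softmax(logits, dim=0)
    idx = torch.multinomial(probs, n_select, replacement=False)
    task_model.train_on(source_data, idx)                   # placeholder training call
    reward = reward_fn(task_model)                          # reward from the target domain
    loss = -torch.log(probs[idx]).sum() * reward            # REINFORCE estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```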
[ "domain adaptation", "training data selection", "reinforcement learning", "natural language processing" ]
https://openreview.net/pdf?id=HyMnYiR9Y7
https://openreview.net/forum?id=HyMnYiR9Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rkgsNaN_Q4", "B1e_KGRllV", "SygLERUjk4", "ryxkLEtc2X", "Hye68GV93m", "S1xEN_Sfhm" ], "note_type": [ "official_comment", "meta_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1548401971393, 1544770176264, 1544412717753, 1541211206670, 1541190228692, 1540671531675 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper483/Authors" ], [ "ICLR.cc/2019/Conference/Paper483/Area_Chair1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper483/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper483/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper483/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Thanks for your kind remainder\", \"comment\": \"Thank you for your information. We will carefully read the paper and refer to them in the future version.\"}", "{\"metareview\": \"This paper investigates a data selection framework for domain adaptation based on reinforcement learning.\", \"pros\": \"The paper presents an approach that can dynamically adjust the data selection strategy via reinforcement learning. More specifically, the RL agent gets reward by selecting a new sample that makes the source training data distribution closer to the target distribution, where the distribution comparison is based on the feature representations that will be used by the prediction classifier. While the use of RL for data selection is not entirely new, the specific method proposed by the paper is reasonably novel and interesting.\", \"cons\": \"The use of RL is not clearly motivated and justified (R1,R3) and the method presented in this paper is rather hard to follow might be overly complex (R1). One fair point R1 raised is more clean-cut empirical evaluation that demonstrates how RL performs clearly better than greedy optimization. The authors came back with additional analysis in Section 4.2 to address this question, but R1 feels the new analysis (e.g., Fig 3) is not clear how to interpret. A more thorough ablation study of the proposed model might have addressed the reviewer's question more clearly. In addition, all reviewers felt that baselines are not convincingly strong enough, though each reviewer pointed out somewhat different aspects of baselines. R3 is most concerned about baselines being not state-of-the-art, and the rebuttal did not address R3's concern well enough.\", \"verdict\": \"Reject. A potentially interesting idea but 2/3 reviewers share strong concerns about the empirical results and overall clarity of the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A potentially interesting idea but 2/3 reviewers share strong concerns about the empirical results and overall clarity of the paper.\"}", "{\"comment\": \"Similar work on using Reinforcement learning for sample selection published (http://www.ecmlpkdd2018.org/wp-content/uploads/2018/09/81.pdf) and need to be referred.\", \"title\": \"Similar work exists\"}", "{\"title\": \"Promising experiments, but I was confused about the method details and motivations\", \"review\": \"Response to author comments:\\n\\nUnfortunately I am still significantly unclear on why RL is useful here. The author response attempts to clarify that by pointing me to paragraph 2 of the intro, which states that RL has been used for data selection in other settings in the past. What would help me (and I believe, the paper) more is a reason why greedy selection isn't sufficient for this particular problem. 
Even just a single motivating example would be extremely helpful. R3 mentioned similar concerns in their review, saying that the paper lacks explanation for why RL would win over non-RL for e.g. sentiment analysis.\\n\\nLikewise, while I appreciate the authors comparing against a stronger baseline in Figure 3, I don't know how to interpret the figure. Why is Figure 3(b) better than Figure 3(c), and why does using RL cause that difference to arise?\", \"original_review\": \"Domain adaptation is an interesting task, and new methods for it would be welcome. This paper appears to have technical depth and the experimental results are promising. However, the presented approach is complex, and I found it very hard to understand -- both in terms of how exactly it works, and in terms of why the chosen techniques were chosen. More detail on my questions and confusions follows.\\n\\nFirst, I never understood the motivation for using RL here. If minimizing the distance between selected data from the source domain and data in the target domain is the objective (equation 1), how does RL help? The reward seems like it is immediate in each time step. How does the *order* in which we add source examples to our collection matter? I never understood the crucial difference that made the RL approach outperform the baselines that just select examples that minimize e.g. JS divergence. Neither the paper's discussion of motivation nor the experimental analysis clarifies this.\\n\\nThe paper says in Section 2.1 that a formal description of the representations is to follow. I didn't see this description (I do not see a formal definition of how the feature extractor works, and e.g. how it produces vectors that are *distributions* that can be used within e.g. JS divergence).\\n\\nThe paper also says it follows (Ruder and Plank, 2017) in using JS as a baseline, but as I understand that work the JS baseline is computed over words, not learned representations. What is done in the submission, is the JS baseline over words in the instances, or the representations from the feature extractor?\\n\\nWhat is the reason for partitioning the source data into disjoint \\\"data bags\\\"? Why not just select the best source domain examples (from among all the source data) using RL?\\n\\nThe experiments are generally over enough tasks and compare against several baselines, and although the empirical wins are not that large I feel that they would be sufficient for publication if not for my other concerns. The analysis (sec 4) did not make it clear to me why the RL approach works. The visualization in Figure 3 only contrasts the proposed approach with a weak baseline of selecting all source data -- what we really need to see is an analysis that reveals why the learning of a policy with RL is better than simply greedily minimizing JS for each source data selection, for up to some limit of n selections.\\n\\nMinor\\nThe paper has a number of typos\\nThe citations in the paper are mis-formatted -- seem to use shortcite where they shouldn't (e.g. 
\\\"scenarios Akopyan and Khashba (2017)\\\" should be \\\"scenarios (Akopyan and Khashba, 2017)\\\").\\nWhen the policy \\\\pi_w(a | s) is introduced at the start of Sec 2.2, it uses symbols (a, s) that have not been defined, also that policy variable is not really utilized in the text so it could be deleted.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Interesting approach for learning domain-invariant features + dataset selection\", \"review\": \"== Originality ==\\nThe idea of matching features/representations across the source domain and target domain is an old idea, but it is executed in an interesting new way in this paper.\\n\\nIn this approach, feature representations are learned by training a neural classifier on the source domain, and an RL agent influences the feature representation by iteratively adding/removing examples from the source training data. The RL agent receives reward when the resulting feature representation causes the source domain data and target domain data to look more similar in distribution in feature space. To efficiently estimate the improvement in feature matching, a nice data bucketing strategy is used.\\n\\nThe novelty of the approach is the main strength of this paper.\\n\\n== Quality of results ==\\nThe experimental results seemed overall positive, but I felt that they could have been stronger.\\n\\nFor POS tagging, the authors don't compare against the domain adaptation methods mentioned in their related work section. Instead, they compare against Bayesian optimization using several heuristic criteria, and it was unclear where this baseline comes from. This made it hard to see whether the new approach represents a true improvement over existing techniques.\\n\\nFor dependency parsing, it appears that the proposed approach is outperformed by simply training on all of the source domain data. It would be interesting to know whether this is because feature-matching is not a good proxy for target domain performance (objective mismatch) or whether the RL system converged to a poor local optima (optimization failure).\\n\\n== Clarity ==\\nI felt that the abstract and introduction were vague in describing the main conceptual contribution.\\n\\nHowever, Section 2 (The Approach) was clearly written and I came away understanding exactly what the authors are doing.\\n\\n== Minor comments ==\\n- Algorithm 1 seems to have a typo: the definition of \\\\nabla \\\\tilde{J}(W) on the second to last line seems to be missing \\\\nabla \\\\log \\\\pi\\n- Many citations throughout the paper need to be wrapped in parentheses\\n\\n== Conclusion ==\\nThis paper presents an interesting new approach for dataset selection and learning domain-invariant representations.\", \"pros\": [\"originality of the approach\"], \"cons\": [\"Experiments could have been more convincing:\", \"should compare against at least one other state-of-the-art domain adaptation method\", \"results on dependency parsing (the most challenging task they consider) were mostly negative\", \"evaluation on other more recent multi-domain NLP tasks would have been nice (e.g. 
MultiNLI)\", \"Abstract and intro could provide better description of the conceptual contribution, as well as motivation\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"This paper proposes an algorithm that jointly selects training samples for domain adaptation and performs a down stream task such as classification, POS tagging, parsing etc. While the proposed model is interesting and innovative, experimental evaluations are largely lacking and insufficient to establish claims of the authors.\", \"review\": \"The paper aims to address issues with Domain Adaptation by using RL approaches. Domain Adaptation is an actively studied area in NLP research and so this paper is relevant and timely. This paper proposes and algorithm that is in line with work that aims at selecting data smartly when performing Domain Adaptation. The proposed algorithm learns representations for text in the source and target domains jointly. The proposed algorithm has two components i) a selection distribution generator (SDG) and ii) a task specific prediction for tasks being POS tagging, Dependency parsing and Sentiment Analysis.\\n\\nWhile the proposed algorithm is interesting from a RL perspective and make sense, there is no explanation provided as to why this algorithm should do better over non RL based approaches for tasks such as Sentiment Analysis.\\n\\nDomain Adaptation is widely studied for Sentiment Analysis and a lot of current research focuses on the various aspects of domain data, such as word and sentence level semantics, when developing algorithms. For example the following papers all (saving the third) address the problem of Domain Adaptation for Sentiment Analysis through various approaches fairly similar to the authors' algorithm, that provide similar if not better results than those provided in the paper,\\n\\n[1]Barnes, Jeremy, Roman Klinger, and Sabine Schulte im Walde. \\\"Projecting Embeddings for Domain Adaption: Joint Modeling of Sentiment Analysis in Diverse Domains.\\\" arXiv preprint arXiv:1806.04381 (2018).\\n[2] Ziser, Yftah, and Roi Reichart. \\\"Pivot Based Language Modeling for Improved Neural Domain Adaptation.\\\" In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), vol. 1, pp. 1241-1251. 2018.\\n[3]An, Jisun, Haewoon Kwak, and Yong-Yeol Ahn. \\\"SemAxis: A Lightweight Framework to Characterize Domain-Specific Word Semantics Beyond Sentiment.\\\" arXiv preprint arXiv:1806.05521 (2018).\\n\\nParticularly the second paper is a clear improvement over SCL (the earlier pivot based approach), a baseline that is considered by the authors in this work. There are no comparisons against this work in this paper, yet the authors compare against SCL alone.\\n\\nDue to lack of comparisons against state-of-the-art in Sentiment Analysis/Domain Adaptation for Sentiment Analysis it is hard to accept the claims made by the authors on the superiority of their algorithm. Had their paper aimed at improving over other RL based approaches for Domain Adaptation for Sentiment Analysis, some experiments could be over looked. \\n\\nBut, when making a claim that addresses the problem of Sentiment Analysis, comparisons against the state-of-the-art non RL based approaches is extremely important. 
Particularly, given the size of the data sets used, one could use lexical/dictionary-based approaches [3] and improve upon the classification accuracies without having to train such an involved algorithm.\\n\\nFurthermore, there is no qualitative analysis provided to gain insight into the behavior of the embedding spaces of the target and source domains that are learned jointly via the proposed algorithm. At least such an analysis would have provided some insight into why the authors' RL-based solution is better than a non-RL-based solution.\\n\\nThe lack of reference or comparison against relevant literature is further highlighted by the seemingly relevant, yet largely dated, related works section.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
r1znKiAcY7
Few-shot Classification on Graphs with Structural Regularized GCNs
[ "Shengzhong Zhang", "Ziang Zhou", "Zengfeng Huang", "Zhongyu Wei" ]
We consider the fundamental problem of semi-supervised node classification in attributed graphs with a focus on \emph{few-shot} learning. Here, we propose Structural Regularized Graph Convolutional Networks (SRGCN), novel neural network architectures extending the well-known GCN structures by stacking transposed convolutional layers for reconstruction of input features. We add a reconstruction error term in the loss function as a regularizer. Unlike standard regularization such as $L_1$ or $L_2$, which controls the model complexity by including a penalty term that depends solely on the parameters, our regularization function is parameterized by a trainable neural network whose structure depends on the topology of the underlying graph. The new approach effectively addresses the shortcomings of previous graph convolution-based techniques for learning classifiers in the few-shot regime and significantly improves generalization performance over original GCNs when the number of labeled samples is insufficient. Experimental studies on three challenging benchmarks demonstrate that the proposed approach matches state-of-the-art results and can improve classification accuracies by a notable margin when there are very few examples from each class.
[ "Graph Convolutional Networks", "Few-shot", "Classification" ]
https://openreview.net/pdf?id=r1znKiAcY7
https://openreview.net/forum?id=r1znKiAcY7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Syevq9hVgE", "rylS6OJiRm", "HJePTS9YRQ", "HyGXiScY0X", "SkghBg9YCX", "S1l0cpFtCm", "S1eCA4l927", "HkxvGahOnm", "rkli05svnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545026190911, 1543334076804, 1543247294660, 1543247259419, 1543245891815, 1543245206103, 1541174486505, 1541094670528, 1541024466907 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper482/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper482/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper482/Authors" ], [ "ICLR.cc/2019/Conference/Paper482/Authors" ], [ "ICLR.cc/2019/Conference/Paper482/Authors" ], [ "ICLR.cc/2019/Conference/Paper482/Authors" ], [ "ICLR.cc/2019/Conference/Paper482/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper482/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper482/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"A new regularized graph CNN approach is proposed for semi-supervised learning on graphs. The conventional Graph CNN is concatenated with a Transposed Network, which is used to supplement the supervised loss w.r.t. the labeled part of the graph with an unsupervised loss that serves as a regularizer measuring reconstruction errors of features. While this extension performs well and was found to be interesting in general by the reviewers, the novelty of the approach (adding a reconstruction loss), the completeness of the experimental evaluation, and the presentation quality have also been questioned consistently. The paper has improved during the course of the review, but overall the AC evaluates that paper is not upto ICLR-2019 standards in its current form.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"interesting extension; but too preliminary\"}", "{\"title\": \"Paper is improved\", \"comment\": \"I appreciate the authors' efforts on revising the paper.\\n\\nNow the theory reads more convincing than before. It makes sense to use an autoencoder to restrict the parameterization through considering additionally the nodes not occurring in the supervised loss.\\n\\nExperiments are much enriched.\\n\\nI raised the score by 1 point. (There might be a line regarding how much change can be made in ICLR; I somehow feel that this paper has been edited too much, understandably because of the poor quality of the initial submission.)\\n\\nThe theory could be strengthened if section 3 is expanded with empirical illustrations of the node coverage under only the supervised loss and a shallow GCN architecture.\\n\\nStill, the writing could be further improved.\"}", "{\"title\": \"Response to Reviewer1 (Part 2/2)\", \"comment\": \"10) ##Clarification on data splits##\\nIn the standard splits, the label rate is relatively high, so for few-shot experiments, we use random splits. For each training size, we test each model on 50 random splits and report the average accuracy and the accuracy distribution. In the revision, we also report standard deviations as suggested. In order to strengthen the experimental results, we revised the experimental part. We have added MoNet as another baseline. 
Some of the results are as follows; and see the revised paper for more results.\\n-----------------------------------------------------------------------------------------------------------------\\n Cora\\n-----------------------------------------------------------------------------------------------------------------\\n 10 20 30 40 50\\n-----------------------------------------------------------------------------------------------------------------\\nGCN 44.69 +/- 8.62 57.84 +/- 7.89 65.42 +/- 5.86 67.74 +/- 4.56 72.64 +/- 3.47\\n-----------------------------------------------------------------------------------------------------------------\\nMoNet 43.92 +/- 8.61 56.20 +/- 7.48 62.08 +/- 5.35 65.43 +/- 4.08 70.04 +/- 3.55\\n-----------------------------------------------------------------------------------------------------------------\\nGAT 35.78 +/- 12.17 50.45 +/-12.35 59.58 +/- 8.33 62.35 +/- 5.24 67.72 +/- 4.25\\n-----------------------------------------------------------------------------------------------------------------\\nSRGCN 50.04 +/- 11.73 64.18 +/- 7.11 70.13 +/- 4.25 71.63 +/- 4.03 76.00 +/- 2.64\\n-----------------------------------------------------------------------------------------------------------------\\n\\n11)##Results on standard splits##\\nIn the revision, we also report experimental results on standard splits. In this setting, SRGCN is still better than GCN and MoNet and is only slightly inferior to GAT, while being arguably much simpler than GAT. The results are as follows (the experimental results except SRGCN are copied from previous work).\\n----------------------------------------------------------------------------\\n Cora Citeseer Pubmed\\n-----------------------------------------------------------------------------\\nGCN 81.4 +/- 0.5 70.9 +/- 0.5 79.0 +/- 0.3\\n-----------------------------------------------------------------------------\\nMoNet 81.7 +/- 0.5 --- 78.8 +/- 0.3\\n-----------------------------------------------------------------------------\\nGAT 83.0 +/- 0.7 72.5 +/- 0.7 79.0 +/- 0.3\\n-----------------------------------------------------------------------------\\nSRGCN 82.3 +/- 0.6 71.8 +/- 0.4 79.0 +/- 0.3\\n-----------------------------------------------------------------------------\\nThese new experiments confirm our previous claims that our SR regularization could improve accuracy significantly for few-shot learning. The results on standard splits shows that it is also an effective regularization method for general purpose.\"}", "{\"title\": \"Response to Reviewer1 (Part 1/2)\", \"comment\": \"We thank the reviewer for the comments and we have revised the paper thoroughly. We are sorry for causing confusion and misunderstandings on some technical issues. We make the following clarifications.\\n1) As noted by the reviewer, convolution operator in GCN is actually quite different from convolution on images. The encoder does not change the nodes nor the edges of the graph, only the features. In particular the dimensionality of the output feature space is much lower. \\n\\n2)##comment on transposed graph convolution## \\nOur main idea is to reconstruct the original features and use the reconstruction errors as a regularization, so the output feature vectors in the low-dimensional space needs to be transformed back into the original space; in particular, the dimensionality needs to be lifted to match the original space. 
Therefore, one can think of the transposed GCN as a GCN being reversed, which also does not change the graph, but only the features.\\nTo illustrate this, let\\u2019s consider a simple one-layer GCN as an example. Now the output of the GCN is Z=\\sigma(\\hat{A} XW), where X is the input feature matrix and W is a trainable weight matrix. To reconstruct X from Z, the transposed GCN is applied on Z: X\\u2019=\\sigma(\\hat{A}^T Z W\\u2019^T). Here \\hat{A} is symmetric, so the transpose operator could be omitted, and W\\u2019 is another trainable weight matrix. To make the dimensionality of X\\u2019 and X the same, W and W\\u2019 must have the same size, and the linear transformation W\\u2019 also needs to be transposed before being applied on Z. \\n\\n3) ##Clarification on pooling##\\nThe pooling method used here follows e.g. https://arxiv.org/pdf/1806.03536.pdf, https://arxiv.org/abs/1706.02216, which is different from pooling in CNNs for images. Here, we simply use the entry-wise max function: max(X, \\hat{A}X), which also does not change the graph, but only the features. So, it is more like an activation function and doesn\\u2019t coarsen the graph. We followed previous work and called this pooling. We are sorry for causing confusion. Now we have made the pooling more explicit in the revised paper and hope that this resolves the concern of the reviewer. \\n\\n4) In our original paper, the supervised loss in section 2.2.1 was ambiguous, so we have revised it in the new version.\\n\\n5) We have added the definition of \\hat A in section 2.2.2 as suggested by the reviewer. \\n\\n6) ##comment on L2 regularization##\\nBoth GCN and GAT use an L2 regularizer in all their experiments. But as demonstrated by our empirical and theoretical results (section 3 in the new version), an L2 regularizer is not enough to handle few-shot learning. Our main contribution is a new regularization. The experiments show that the performance of GCN + L2 + new regularizer significantly outperforms GCN + L2. So we think it is fair enough to say that our new regularizer is highly effective.\\nWe emphasize that we don\\u2019t mean to replace standard regularization methods. \\n\\n7) ##Simplified analysis##\\nWe have greatly simplified the mathematics in section 3. We reorganized the material and simplified the notations that caused much confusion before. In particular, we removed the notion of influence distribution of Xu et al., as we find that this notion is not necessary for our purpose. We think that the revised version provides a clearer and more mathematical explanation of why a shallow GCN is not sufficient for few-shot learning and why standard regularization doesn\\u2019t help.\\n\\n8) ##comment on higher weight for L2 regularizer##\\nFor the weights of the regularizers, one should see that a higher weight doesn\\u2019t mean the corresponding regularizer is more important. In our case, although a higher weight is given to the L2 regularizer compared with the reconstruction loss (0.0005 vs 0.0001), the overall effect of the reconstruction loss is much stronger. The main reason that the weight of the reconstruction regularizer is smaller is that the reconstruction loss is a function of all feature vectors, whose total size is n*d (n is the number of nodes and d is the number of input features). n*d is typically much larger than the number of parameters in our setting.
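To make the algebra in points 2) and 3) above concrete, here is a small NumPy sketch of the one-layer encoder/decoder pair and the entry-wise max; the shapes follow the equations, but the names are illustrative rather than the authors' code, and applying the max to the layer output (rather than the input) is an assumption:

```python
# One-layer SRGCN-style reconstruction: Z = sigma(A_hat X W) is the encoder
# and X_rec = sigma(A_hat Z W_dec^T) is the transposed decoder, with W and
# W_dec of the same d x h shape so X_rec lands back in the input space.
import numpy as np

def relu(M):
    return np.maximum(M, 0.0)

def reconstruction_loss(A_hat, X, W, W_dec):
    Z = relu(A_hat @ X @ W)              # encoder: (n x n)(n x d)(d x h) -> n x h
    Z = np.maximum(Z, A_hat @ Z)         # entry-wise "pooling" max(Z, A_hat Z), as in point 3
    X_rec = relu(A_hat @ Z @ W_dec.T)    # decoder: (n x n)(n x h)(h x d) -> n x d
    return np.mean((X - X_rec) ** 2)     # reconstruction error used as the regularizer
```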
In particular, after training, 0.0001*reconstruction loss is always much larger than 0.0005*L2.\\n\\n9) ##Hyperparameters##\\nNote the encoder and decoder of SRGCN are two symmetric GCNs and we use the same set of hyper-parameters for them, which are exactly the same as suggested by Kipf et al. The only additional parameter SRGCN has is the weight of the reconstruction loss, which is set to 0.0001 in all experiments. All the parameters were explicitly listed in section 5.1.\"}", "{\"title\": \"Response to Reviewer3\", \"comment\": \"We thank the reviewer for the constructive comments and have revised the paper significantly.\\n1) We have greatly simplified the mathematics in section 3. We reorganized the material and simplified the notations that causing much confusion before. In particular, we removed the notion of influence distribution of Xu et al., as we find that this notion is not necessary for our purpose. We think that the revised version provides a clearer and more mathematical explanation on why shallow GCN is not sufficient for few-shot learning and why standard regularization doesn\\u2019t help.\\n\\n2) We have made a thorough revision and proofreading to eliminate grammatical errors throughout the paper. We are sorry for causing much trouble in the first version. \\n\\n3) In order to strengthen the experimental results, we revised the experimental part. We have added MoNet (suggested by Reviewer 2) as another baseline and provide standard deviation of accuracy in all the experimental results as suggested. Some of the results are as follows, and the rest is in the revised paper.\\n-----------------------------------------------------------------------------------------------------------------\\n Cora\\n-----------------------------------------------------------------------------------------------------------------\\n 10 20 30 40 50\\n-----------------------------------------------------------------------------------------------------------------\\nGCN 44.69 +/- 8.62 57.84 +/- 7.89 65.42 +/- 5.86 67.74 +/- 4.56 72.64 +/- 3.47\\n-----------------------------------------------------------------------------------------------------------------\\nMoNet 43.92 +/- 8.61 56.20 +/- 7.48 62.08 +/- 5.35 65.43 +/- 4.08 70.04 +/- 3.55\\n-----------------------------------------------------------------------------------------------------------------\\nGAT 35.78 +/- 12.17 50.45 +/-12.35 59.58 +/- 8.33 62.35 +/- 5.24 67.72 +/- 4.25\\n-----------------------------------------------------------------------------------------------------------------\\nSRGCN 50.04 +/- 11.73 64.18 +/- 7.11 70.13 +/- 4.25 71.63 +/- 4.03 76.00 +/- 2.64\\n-----------------------------------------------------------------------------------------------------------------\\nIn addition, we provide the accuracy distribution of GAT and MoNet. Their box plots are shown in Section 5.2 and distribution histograms are presented in the appendix due space constraint. Furthermore, we have provided more detailed discussions on the experimental results. \\n\\n4) We also add results on standard splits. In this setting, SRGCN is still better than GCN and MoNet and is only slightly inferior to GAT, while being arguably much simpler than GAT. 
The results are as follows (the experimental results except SRGCN are copied from previous work) .\\n----------------------------------------------------------------------------\\n Cora Citeseer Pubmed\\n-----------------------------------------------------------------------------\\nGCN 81.4 +/- 0.5 70.9 +/- 0.5 79.0 +/- 0.3\\n-----------------------------------------------------------------------------\\nMoNet 81.7 +/- 0.5 --- 78.8 +/- 0.3\\n-----------------------------------------------------------------------------\\nGAT 83.0 +/- 0.7 72.5 +/- 0.7 79.0 +/- 0.3\\n-----------------------------------------------------------------------------\\nSRGCN 82.3 +/- 0.6 71.8 +/- 0.4 79.0 +/- 0.3\\n-----------------------------------------------------------------------------\\n\\nThese new experiments confirm our previous claims that our SR regularization could improve accuracy significantly for few-shot learning. The results on standard splits shows that it is also an effective regularization method for general purpose.\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"Thanks for the constructive comments and feedback. We have revised the paper accordingly. Major revisions are summarized as follows.\\n1) We have added MoNet as another baseline as suggested. We have also added the standard deviation of accuracy in all the experimental results as suggested by reviewer 3. The experimental results of MoNet on task 1 on Cora and Citeseer are as follows (see our revised paper for more results on MoNet)\\n-----------------------------------------------------------------------------------------------------------------\\n Cora\\n-----------------------------------------------------------------------------------------------------------------\\n 10 20 30 40 50\\n-----------------------------------------------------------------------------------------------------------------\\nMoNet 43.92 +/- 8.61 56.20 +/- 7.48 62.08 +/- 5.35 65.43 +/- 4.08 70.04 +/- 3.55\\n-----------------------------------------------------------------------------------------------------------------\\nSRGCN 50.04 +/- 11.73 64.18 +/- 7.11 70.13 +/- 4.25 71.63 +/- 4.03 76.00 +/- 2.64\\n-----------------------------------------------------------------------------------------------------------------\\n----------------------------------------------------------------------------------------------------------------\\n Citeseer\\n-----------------------------------------------------------------------------------------------------------------\\n 10 20 30 40 50\\n-----------------------------------------------------------------------------------------------------------------\\nMoNet 37.58 +/- 7.02 45.36 +/- 7.61 54.43 +/- 6.79 57.22 +/- 5.59 59.94 +/- 4.78\\n-----------------------------------------------------------------------------------------------------------------\\nSRGCN 48.84 +/-10.76 57.99 +/- 7.09 64.04 +/- 6.47 66.60 +/- 3.00 67.72 +/- 2.17\\n-----------------------------------------------------------------------------------------------------------------\\nFrom the results, our SRGCN also outperforms MoNet by a large margin. Compared with other baselines, MoNet is generally worse than GCN in task 1 but is more competitive in task 2.\\n\\n2) We also add results on standard splits. In this setting, SRGCN is still better than GCN and MoNet and is only slightly inferior to GAT, while being arguably much simpler than GAT. 
The results are as follows (the experimental results except SRGCN are copied from previous work) .\\n----------------------------------------------------------------------------\\n Cora Citeseer Pubmed\\n-----------------------------------------------------------------------------\\nGCN 81.4 +/- 0.5 70.9 +/- 0.5 79.0 +/- 0.3\\n-----------------------------------------------------------------------------\\nMoNet 81.7 +/- 0.5 --- 78.8 +/- 0.3\\n-----------------------------------------------------------------------------\\nGAT 83.0 +/- 0.7 72.5 +/- 0.7 79.0 +/- 0.3\\n-----------------------------------------------------------------------------\\nSRGCN 82.3 +/- 0.6 71.8 +/- 0.4 79.0 +/- 0.3\\n-----------------------------------------------------------------------------\\nThese new experiments confirm our previous claims that our SR regularization could improve accuracy significantly for few-shot learning. The results on standard splits shows that it is also an effective regularization method for general purpose.\\n\\n3) We have greatly simplified the mathematics in section 3. We reorganized the material and simplified the notations that causing much confusion before. In particular, we removed the notion of influence distribution of Xu et al., as we find that this notion is not necessary for our purpose. We think that the revised version provides a clearer and more mathematical explanation on why shallow GCN is not sufficient for few-shot learning and why standard regularization doesn\\u2019t help.\"}", "{\"title\": \"presentation could be significantly improved, details are missing, validation is not compelling\", \"review\": \"This paper proposes to regularize the training of graph convolutional neural networks by adding a reconstruction loss to the supervised loss. Results are reported on citation benchmarks and compared for increasing number of labeled data.\\n\\nThe presentation of the paper could be significantly improved. Details of the proposed model are missing and the effects of the proposed regularization w.r.t. other regularizations are not analyzed.\\n\\nMy main concerns are related to the model design, the novelty of the approach (adding a reconstruction loss) and its experimental evaluation.\\n\\nDetails / references of the transposed convolution operation are missing (see e.g. https://ieeexplore.ieee.org/document/7742951). It is not clear what the role of the transposed convolution is in that case. It seems that the encoder does not change the nodes nor the edges of the graph, only the features, and the filters of the transposed convolution are learnt. If the operation is analogous to the transposed convolution on images, then given that the number of nodes in the graph does not change in the encoder layers (no graph coarsening operations are applied), then learning an additional convolution should be analogous (see e.g. https://arxiv.org/pdf/1603.07285.pdf Figure 4.3.). Could the authors comment on that?\\n\\nDetails on the pooling operation performed after the transposed convolution are missing (see e.g. https://arxiv.org/pdf/1805.00165.pdf, https://arxiv.org/pdf/1606.09375.pdf). Does the pooling operation coarsen the graph? if so, how is it then upsampled to match the input graph?\\n\\nFigure X in section 2.1. 
does not exist.\\n\\nThe supervised loss in section 2.2.1 seems to disregard the sum over the nodes which have labels.\\n\\n\\hat A is not defined when it is introduced (in section 2.2.2); it appears later in section 2.3.\\n\\nSection 2.2.2 suggests that additional regularization (such as L2) is still required (note that the introduction outlines the proposed loss as a combination of reconstruction loss and supervised loss). An ablation study using each of the two regularizers on its own should be performed to better understand their impact. Note that the chosen hyper-parameters give a higher weight to the L2 regularizer.\\n\\nSection 3 introduces a bunch of definitions to presumably compare GCN against SRGCN, but those measures of influence are not reported for any model.\\n\\nExperimental validation raises some concerns. It is not clear whether standard splits for the reported datasets are used. It is not clear whether hyper-parameter tuning has been performed for the baselines. The authors state \"the parameters of GCN and GAT and SRGCN are the same following (Kipf et al; Velickovic et al.)\". Note that SRGCN probably has additional parameters, due to the decoder stacked on top of the GCN. Reporting the number of parameters that each model has would provide more insight. Results are not reported following the standard of running the models N times and providing mean and std. Moreover, there are no results using the full training set.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"interesting extension to GCNs, somewhat lacking a comprehensive evaluation\", \"review\": \"I appreciate the author response and additional effort to provide a comparison with MoNet. I have raised my rating by 1 point. It should be noted that the edits to the revision are quite substantial and more in line with a journal revision. My understanding is that only moderate changes to the initial submission are acceptable.\\n\\n-----------------------------------------------\\n\\nThe paper introduces a new regularization approach for graph convolutional networks. A transposed GCN is appended to a regular GCN, resulting in a trainable, graph-specific regularization term modelled as an additional neural network.\\n\\nExperiments demonstrate performance on par with previous work in the case where sufficient labelled data is available. The SRGCNs seem to shine when only a few labelled examples are available (the few-shot setting).\\n\\nThe method is appealing as the regularization adapts to the underlying graph structure, unlike structure-agnostic regularization such as L1.\\n\\nIt is unclear why the results are not compared to MoNet (Monti et al. 2017), which seems to be the current state-of-the-art for semi-supervised classification of graph nodes.\\n\\nOverall, a well-written paper with an interesting extension to GCN. The paper is lacking a comprehensive evaluation and comparison to the latest work on graph neural networks.
The results in the few shot setting are compelling.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Idea is reasonable; work is preliminary\", \"review\": \"Edited: I raised the score by 1 point after the authors revised the paper significantly.\\n\\n--------------------------------------------\\n\\nThis paper proposes a regularization approach for improving GCN when the training examples are very few. The regularization is the reconstruction loss of the node features under an autoencoder. The encoder is the usual GCN whereas the decoder is a transpose version of it.\\n\\nThe approach is reasonable because the unsupervised loss restrains GCN from being overfitted with very few unknown labels. However, this paper appears to be rushed in the last minute and more work is needed before it reaches an acceptable level.\\n\\n1. Theorem 1 is dubious and the proof is not mathematical. The result is derived based on the ignorance of the nonlinearities of the network. The authors hide the assumption of linearity in the proof rather than stating it in the theorem. Moreover, the justification of why activation functions can be ignored is handwavy and not mathematical.\\n\\n2. In Section 2.2 the authors write \\\"... framework is shown in Figure X\\\" without even showing the figure.\\n\\n3. The current experimental results may be strengthened, based on Figures 1 and 2, through showing the accuracy distribution of GAT as well and thoroughly discussing the results.\\n\\n4. There are numerous grammatical errors throughout the paper. Casual reading catches these typos: \\\"vertices which satisfies\\\", \\\"makes W be affected\\\", \\\"the some strong baseline methods\\\", \\\"a set research papers\\\", and \\\"in align with\\\". The authors are suggested to do a thorough proofreading.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
ByghKiC5YX
Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data
[ "Puyudi Yang", "Jianbo Chen", "Cho-Jui Hsieh", "Jane-Ling Wang", "Michael I. Jordan" ]
We present a probabilistic framework for studying adversarial attacks on discrete data. Based on this framework, we derive a perturbation-based method, Greedy Attack, and a scalable learning-based method, Gumbel Attack, that illustrate various tradeoffs in the design of attacks. We demonstrate the effectiveness of these methods using both quantitative metrics and human evaluation on various state-of-the-art models for text classification, including a word-based CNN, a character-based CNN and an LSTM. As an example of our results, we show that the accuracy of character-based convolutional networks drops to the level of random selection by modifying only five characters through Greedy Attack.
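To give a feel for the perturbation-based method the abstract names, below is a hedged sketch of a two-stage greedy word attack; model_prob, candidates, and the mask token are illustrative stand-ins, and the paper's actual Greedy Attack may differ in its details:

```python
# Two-stage greedy attack sketch: (1) rank positions by the prediction drop
# when each token is masked, (2) at the top positions, greedily pick the
# substitution that most reduces the probability of the true class y.
def greedy_attack(tokens, y, model_prob, candidates, budget=5, mask="<unk>"):
    base = model_prob(tokens, y)
    drops = [base - model_prob(tokens[:i] + [mask] + tokens[i + 1:], y)
             for i in range(len(tokens))]
    order = sorted(range(len(tokens)), key=lambda i: -drops[i])
    adv = list(tokens)
    for i in order[:budget]:
        adv[i] = min(candidates(adv[i]),
                     key=lambda w: model_prob(adv[:i] + [w] + adv[i + 1:], y))
    return adv
```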
[ "Adversarial Examples" ]
https://openreview.net/pdf?id=ByghKiC5YX
https://openreview.net/forum?id=ByghKiC5YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJlr2RR-xV", "HJgu9RRWxE", "r1gzkf3Zx4", "Ske46vsZeE", "HJl-l_1-lN", "H1xfMh9lgN", "S1xyRzcxeV", "HkxPdeCh14", "r1lwJY3tyE", "BkeaGveDJV", "Syg18fj11E", "HyeQu_ns07", "BkgBBnMApX", "ryeJ9zR3pQ", "H1x-WZR2T7", "H1e_TlAhTQ", "HyxnjDahaX", "BJxl0Dtjpm", "Hkx5iwFoam", "S1x8tPKiam", "B1lXIwYsT7", "H1exhKOmp7", "SJepNwO7pX", "BJgwIBu7am", "Skg9KAvmTm", "SkxNEow7Tm", "B1xno5Dz6X", "Hkxb5b3kpX", "SJx8hts16X", "BJekkfSq3Q", "S1gSsdb537", "HJlmF5xq27" ], "note_type": [ "official_comment", "official_comment", "official_comment", "comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544838828985, 1544838799960, 1544827353891, 1544824764233, 1544775656778, 1544756233879, 1544753862540, 1544507503413, 1544304862947, 1544124181197, 1543643718739, 1543387243234, 1542495293108, 1542410887490, 1542410489404, 1542410432388, 1542408099676, 1542326215576, 1542326178172, 1542326142360, 1542326090778, 1541798311811, 1541797685382, 1541797199061, 1541795458029, 1541794603589, 1541728931954, 1541550472735, 1541548461554, 1541194198706, 1541179548860, 1541175930775 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper481/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper481/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "~Nicholas_Carlini1" ], [ "ICLR.cc/2019/Conference/Paper481/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper481/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper481/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer1" ], [ "~Nicholas_Carlini1" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper481/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper481/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper481/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper481/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Why is the topic (finding nearby errors) important? (Part 2)\", \"comment\": \"4. In terms of security, finding the error is a way to attack the model.\\n\\nFrom the security perspective, the ability of finding nearby errors (adversarial examples) leads to many security threats. 
Several important applications include attacking self-driving cars (a slightly perturbed traffic sign can fool self-driving cars) and malicious ads (a small change to an ad picture can pass ML-based blocking or ranking models; see a nice new paper at https://arxiv.org/abs/1811.03194). \\n\\nWe believe this motivation is still valid for text data. For instance, if an attacker has a spam email to send out but the original email cannot bypass an ML-based spam filter, he or she can slightly perturb the text to bypass the spam filter. The same thing can be done in many other online applications. \\n\\nOr, for instance, A sends B a message or a document, and an attacker between A and B can change a small number of bits in the message to make the ML model at B misclassify. The number of bits could correspond to the edit distance in text data, which is one motivation for using edit distance. \\n\\n5. Is \\u201cedit distance\\u201d the best distance measurement for text adversarial examples? \\n\\nWe think edit distance is a natural distance measurement for text, but of course it might not be perfect. In general, even for image adversarial examples, there\\u2019s debate on which norm should be used. FGSM (https://arxiv.org/pdf/1412.6572.pdf) aims to control the L_\\infty norm of the perturbation; C&W works for different L_p norms (https://arxiv.org/abs/1608.04644); recent papers also proposed more complicated norms such as the L1-L2 norm (https://arxiv.org/abs/1709.04114), one-pixel change (https://arxiv.org/abs/1710.08864), rotate/shift (https://arxiv.org/pdf/1712.02779.pdf) and semantic similarity (https://arxiv.org/pdf/1804.00499.pdf). We agree that exploiting different similarity measurements will be important, but in this paper we just focus on the most intuitive distance measurement for text and develop algorithms to find minimal adversarial perturbations. Also, we conduct human evaluation to show that minimizing edit distance works to some extent to achieve the goal that \\u201cthe semantic meaning of the sentence is not changed\\u201d.\"}", "{\"title\": \"Why is the topic (finding nearby errors) important? (Part 1)\", \"comment\": \"1. Finding the error is the first step toward fixing it.\\n\\nIn general, given a sample x with a correct prediction, the model is robust if it outputs the same correct label for all the nearby points *within a small distance*. \\u201cDefense\\u201d algorithms are proposed to improve the robustness of models, but before doing defense, it\\u2019s necessary to know *how to evaluate the robustness of a model*. \\n\\n**There is no way to improve robustness if you don\\u2019t know how to measure it.**\\n\\nTherefore, to evaluate the robustness of models, we need to *find nearby errors* for a given x. The AC might think doing random perturbation will work, but based on the experiments, simple random perturbation does not work for text (see our explanations in the \\u201cMotivation\\u201d threads) and also doesn\\u2019t work for image applications. Therefore, finding nearby errors is the first step before fixing them, so this has become an important task in our community. See the list of attacks for text data in our \\u201cMotivation\\u201d thread, and there\\u2019s a much longer list of work on computer vision applications. \\n\\nMoreover, recently researchers also found that *errors with small perturbation can be used to improve robustness*.
The strategy is called \\u201cadversarial training\\u201d: when training the model, we keep finding adversarial examples and adding them to the training data. Random perturbation again doesn\\u2019t work well in this case, so the state-of-the-art method works as follows: 1) find nearby adversarial samples based on the current batch; 2) run SGD on those adversarial samples. See a seminal work (https://arxiv.org/abs/1412.6572) and one of the state-of-the-art methods (https://arxiv.org/abs/1706.06083). This is another reason that we want to find adversarial examples (nearby errors). \\n\\n2. It\\u2019s not surprising that such a nearby error exists. The question is how to find it. \\n\\nWe totally agree that it\\u2019s not surprising that such a nearby error exists, and it\\u2019s also not our focus to show that it exists. What we are doing in this paper is proposing an efficient way to find such errors, which can be used to measure the robustness and identify the blind spots of the model (see our point 1).\\n\\n3. Attacks can often be reduced to an optimization problem. Does this mean attacks are trivial? \\n\\nAs the AC pointed out and we also agreed, finding an adversarial example (based on edit distance) for text classification can be formulated as a discrete optimization problem. Actually, this is the case for most attacks. To attack a machine learning model, we want to \\n\\nfind x\\u2019 \\\\in Ball(x, \\\\epsilon) to *maximize* Loss(f(x\\u2019), y),\\n\\nwhich is naturally a constrained optimization problem. For image classification, x\\u2019 lies in a continuous and bounded space. The seminal paper (https://arxiv.org/pdf/1312.6199.pdf) proposed to solve this by L-BFGS. The state-of-the-art C&W attack (https://arxiv.org/abs/1608.04644) is based on a similar formulation with different loss functions. Given this, there are still tons of papers proposing different attacks at ICLR, in both black-box and white-box settings. \\n\\nThe AC pointed out that \\u201cSince a standard greedy algorithm works, there can't be anything special about this particular optimization problem that standard methods can't handle.\\u201d So does this imply there\\u2019s no contribution in applying an existing optimization algorithm to solve an ML problem? We disagree. \\n\\nIn general, we think \\u201cshowing that an optimization algorithm works well for an ML problem\\u201d is itself an important contribution. There have been many works applying existing optimization algorithms to ML models, such as SVM optimization, graphical models, sparse recovery, and low-rank recovery, and these works have led to faster training/inference and many easy-to-use ML packages that were really beneficial to the community. Furthermore, when people apply an existing algorithm to an ML problem, they usually need to slightly change the algorithm to exploit the structure of the problem, which is very important in practice. \\n\\nWe can see the same trend in the research on adversarial attacks, in both the white-box setting (L-BFGS was used initially, then gradient descent, and then Adam) and the black-box setting (coordinate descent was used initially, then NES, genetic algorithms, etc.). 
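To make the white-box formulation above concrete, here is a minimal PyTorch sketch of the one-step (FGSM-style) approximation of "find x' in Ball(x, epsilon) to maximize Loss(f(x'), y)" under the L_inf norm, together with the adversarial-training step described in point 1. The names (`model`, `loss_fn`, `opt`, `eps`) are illustrative assumptions, not artifacts of the paper under discussion.

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    # One-step approximation of: max over ||x' - x||_inf <= eps of Loss(f(x'), y)
    # (Goodfellow et al., https://arxiv.org/abs/1412.6572).
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, loss_fn, opt, x, y, eps):
    # 1) find nearby adversarial samples for the current batch;
    # 2) run an SGD step on those samples (cf. https://arxiv.org/abs/1706.06083,
    #    which iterates the inner maximization with PGD instead of one FGSM step).
    x_adv = fgsm(model, loss_fn, x, y, eps)
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```

This continuous-input sketch is only an analogue: for text, the same objective is solved over a discrete space, which is exactly the distinction the authors draw below.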
In our case, the algorithm is based on greedy optimization but has some treatments to make it adapt to text attacks; see the discussion thread \\u201cEfficiency of Gumbel Attack; Difference and Connections in Discrete Optimization and Adversarial Attack.\\u201d Furthermore, we show how to attack efficiently using the Gumbel trick, which is also an interesting finding.\"}", "{\"title\": \"Your proposed experiments are interesting!\", \"comment\": \"Wow! Thanks for proposing these interesting experiments! I personally agree with you that they are worth investigating. Actually, your thoughts point to some directions we are thinking about. I'd like to share with you some more of my personal thoughts, not necessarily related to the paper.\\n\\nYou mentioned that \\\"papers which identify the minimum perturbation without much other contribution\\\" may not be interesting. I agree with you. Experiments also need to be carried out to see whether humans can make the right decision after this, how to do this efficiently in the setting of adversarial examples, etc.\", \"as_ac_said\": \"\\\"this is not surprising in the discrete case because the models considered certainly have a non-trivial amount of test error in the data distribution.\\\" Yeah, that's true! But does it suffice to know the test error exists? No, it does not. We need to investigate what the error is, or 'what is the inherent bias' as you mentioned; more importantly, 'how to characterize that bias?' Finally, 'how to fix that?'\\n\\nFundamentally, the first thing we care about is: \\\"How do we define the bias of a model?\\\" I think maybe we can summarize it in inaccurate language: if most humans think the label of an instance is A, but the model outputs B, then the model has a bias. After that, we can proceed to find out \\\"what is a precise way to summarize the bias of existing models\\\". Maybe we can use mathematical language, maybe we can summarize with another domain-specific abstraction (e.g., texture). In the end, equipped with the knowledge, we proceed to fix the models. Perhaps one can use adversarial training? Or add some simple rules? So many potentially interesting directions are waiting for us!\\n\\nOne more thing: although you pointed me to a nice paper, I feel sad about the cat in that paper that was forced to wear the elephant skin :)\"}", "{\"comment\": \"I'm not reviewing this paper, but I'm interested in this discussion. It sounds like you are arguing that the topic of small worst-case perturbations is interesting because lots of people find it interesting. Can you be more specific? I'm not asking about adversarial examples as a whole, but more specifically about papers which identify the minimum perturbation without much other contribution. What are we learning from the minimum perturbation, other than that there are inputs for which the model makes a mistake and we found the nearest one? You are saying the distance of the minimum perturbation isn't surprising, and I agree, so then what is the point of identifying the minimum perturbation?\\n\\nI'm all for an error analysis, assuming we learn something from it. Maybe for text we could identify inherent biases of the model. For example, it's probably not a coincidence that in Table 3 modifying a word associated with sentiment (better) changed the model prediction. Just speculating, but perhaps if the authors reran the random replacement experiment with a bias towards inserting words associated with sentiment, they would find random replacement more likely to degrade the performance of the model. 
What happens if you randomly append a bunch of sentiment-related words as the last sentence of the paragraph? Identifying such a sentiment bias could be interesting and potentially useful.\\n\\nAs a concrete example for computer vision, I really liked this paper, which identifies a texture bias for computer vision models: https://openreview.net/pdf?id=Bygh9j09KX.\", \"title\": \"I don't understand either\"}", "{\"title\": \"of course minimum perturbations will be smaller than random perturbations\", \"comment\": \"I did not mean to say that it's surprising that minimum adversarial perturbations are smaller than random perturbations. My comment was meant to second Nicholas in the importance of minimum adversarial perturbations. Just like him, I'd like to ask you whether your arguments against the importance of this work would also hold for any paper on adversarials, whether on text or images? For image-based adversarial attacks and defenses there have been dozens of papers this year, and the topic is important to many.\"}", "{\"metareview\": \"I appreciate the willingness of the authors to engage in vigorous discussion about their paper. Although several reviewers support accepting this submission, I do not find their arguments for acceptance convincing. The paper considers automated methods for finding errors in text classification models. I believe it is valuable to study the errors our models make in order to understand when they work well and how to improve them. Crucially, in the latter case, we should demonstrate how to use the errors we find to close the loop and create better models.\\n\\nA paper about techniques to find errors for text models should make a sufficiently large contribution to be accepted. I view the following hypothetical contributions as the most salient in this specific case; thus, my decision reduces to determining whether any of these conditions have been met. A paper need not achieve all of these things; any one of them would suffice:\\n\\n1. Show that the errors found can be used to meaningfully improve the models. \\n\\nThis requires building a better model than the one probed by the method and convincingly demonstrating that it is superior in an important way that is relevant to the original goals of the application. Ideally it would also consider alternative, simpler ways to improve the models (e.g. making them larger).\\n\\n2. Show that errors are difficult to find, but that the proposed method is nonetheless capable of finding errors and that the method is non-obvious to a researcher in the field.\\n\\nThis is not applicable here because errors are extremely easy to find on the test set and from labeling more data. If we demand an automated method, then the greedy algorithm does not qualify as sufficiently non-obvious, and it seems to work fine, making the Gumbel method unnecessary.\\n\\n3. Show that the particular specific errors found are qualitatively different from other errors in their implications and that they provide a unique and important insight.\\n\\nI do not believe this submission attempts to show this type of contribution. One example of this type of paper would be a paper that does a comparative study of the errors that different models make and finds something interesting (potentially yielding a path to improved models).\\n\\n4. Generate a new, more difficult/interesting dataset by finding errors of one or more trained models.\\n\\nGiven that the authors use human labelers to validate examples, this is potentially another path. 
Here is an example of a paper using adversarial techniques in this way: https://arxiv.org/abs/1808.05326\\nHowever, I believe the paper would need to be rethought and rewritten to make this sort of contribution.\\n\\n\\nUltimately, the authors and reviews supporting acceptance must explain the contribution succinctly and convincingly. The reviewers most strongly advocating for accepting this submission seem to be saying that there is a valuable new method and probabilistic framework proposed here for finding model errors. I believe researchers in the field could have easily come up with the greedy algorithm (a standard approach to discrete optimization problems) proposed here without needing to read the paper. Furthermore, I believe the other more complicated Gumbel algorithm proposed is not necessary given the similarly effective and simpler greedy algorithm. If the authors believe that the Gumbel algorithm provides application-relevant advantages over the greedy algorithm, then they should specify how these errors will be used and rewrite the paper to make the greedy algorithm a baseline. However, I do not believe the experimental results support this idea.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Although several reviewers support accepting this submission, I do not find their arguments for acceptance convincing\"}", "{\"title\": \"I still don't understand your argument\", \"comment\": \"Are you saying that finding nearby (in edit distance) errors has specific value? When do we need this in text?\\n\\nAre you saying that we should be surprised that the minimum perturbation in the Lp ball case is smaller than randomly found perturbations? If so, I disagree that this is surprising (see the thread above with Nicholas Carlini as well as the convincing paper he linked that supports this point). I also don't think this is surprising in the discrete case because the models considered certainly have a non-trivial amount of test error in the data distribution.\"}", "{\"title\": \"We agree to disagree.\", \"comment\": \"We express our sincere thanks to Reviewer 3 for the support. We think adversarial examples on texts are interesting to study, as is explained by Reviewer 3, as well as for the reasons we explained in previous posts.\\n\\nOn the other hand, we also understand it is natural for different people to be excited about different areas, and to feel certain pieces of work less interesting. We still appreciate Reviewer 4 for reading our rebuttal.\"}", "{\"title\": \"Do the same arguments apply to vision?\", \"comment\": \"Your arguments strike me as being equally applicable to adversarial images: there, a fairly small amount of salt & pepper noise is usually sufficient to fool DNN classifiers on ImageNet. Still, there are literally dozens of publications each conference looking at this problem. I fail to see why adversarials on text are less interesting than adversarials in the image domain.\"}", "{\"title\": \"I'm then not convinced that \\\"adversarial examples\\\" in this context are interesting to study.\", \"comment\": \"\\\"First, we agree with the reviewer that adversarial attack on texts is at a relatively new stage compared to the counterpart on images.\\\"\\n\\nI brought this point up not because the methods introduced in this paper are not efficient enough at producing adversarial examples. 
I brought it up because I am not convinced that the distortions proposed in this paper, called adversarial examples by the authors, are significant enough for an ICLR acceptance. \\n\\n\\\"As an example, it does not make sense to reject a paper because their model achieves a lower accuracy on ImageNet than the accuracy of a very simple model on MNIST. \\\"\\n\\nI agree that these distortions on text data are not quantitatively comparable to similar distortions on image data. My concern is that finding distortions that fool text classifiers by itself is not a significant enough development. Almost all machine learning models fail to generalize to some small distortions (\\\"small\\\" as defined by distortions that would not fool humans). The authors present the distortions in this paper as especially worthy of study, as they fall under the category of \\\"adversarial examples\\\". I unfortunately do not find this convincing.\"}", "{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for reading our paper and giving detailed comments on our paper. However, we observe the review is posted after the deadline of paper modification has passed, and hope to address a few points of this review.\\n\\nIn short,\\n1. We think the comparison between adversarial attack on images and on texts is unfair.\\n2. Some of the questions in the review are highly correlated with AC\\u2019s questions and we have answered in our previous rebuttal. \\n\\nWe now address them in details. \\n\\nFirst, we agree with the reviewer that adversarial attack on texts is at a relatively new stage compared to the counterpart on images. However, to evaluate our work, we think it makes more sense to compare our methods with the best methods in this area, instead of with methods on a different data set. As an example, it does not make sense to reject a paper because their model achieves a lower accuracy on ImageNet than the accuracy of a very simple model on MNIST. \\nWe have shown that our method outperforms previous text adversarial attack algorithms in Figure 3, and we have even compared all methods under human evaluation in Appendix B, which indicates that humans are least sensitive to adversarial examples generated by our algorithm. So we believe our method can advance state-of-the-art in attacks on texts. \\n\\n\\u201cThe attacks on character-based models are closer to adversarial examples from this perspective.\\u201d\\nWe aim to propose a general mathematical framework to generate adversarial examples for models with discrete input. Thus, the same algorithm works for both character-based and word-based models, and could be potentially useful for other NLP models such as word-piece models. We are happy to see that the reviewer consider character-based adversarial examples more interesting.\\n\\n\\u201cThe Greedy attack is a straightforward application of greedy optimization on discrete data and is not very novel or interesting.\\u201d\\nSee our rebuttal to AC (point 2) in \\u201cEfficiency of Gumbel Attack; Difference and Connections in Discrete Optimization and Adversarial Attack.\\u201d We will also elaborate this in our final version.\\n\\nThe reviewer also proposed to include an experiment on how our attacks perform on models trained with data augmentation techniques. We agree with the reviewer that the proposed experiment can be interesting. However, given the timeline, we are not able to update the paper now. 
We are willing to add it in our final version.\"}", "{\"title\": \"Review\", \"review\": \"This paper introduces two new methods for generating adversarial examples for text classification models. The paper is well written, the introduced algorithms and experiments are easy to understand.\\n\\nHowever, I do not believe that these two methods are sufficiently significant. First of all, I am not convinced that the attacks can be classified as \\u201cadversarial examples\\u201d, especially the ones on the word-based models. The community originally got interested in adversarial examples because while they can easily be classified correctly by humans, they seemed to fool machine learning models with high efficiency. For example, the PGD attack by Madry et al. can reduce the accuracy of a CIFAR-10 model to 0% by using distortions that are not at all noticeable to humans. In the case of the word-based task studied here, human accuracy drops by 8-11%. \\n\\nWhile the question of whether adversarial examples are actually a security threat is under debate, the attacks on the word-based models here do not even classify as adversarial examples. Of course, it is interesting that the ML models are much less robust to these distortions than humans are, however, this is a well known problem. This paper did not perform comprehensive experiments to investigate this phenomenon. For example, they could have evaluated a wide range of distortions (including random distortions), and then check if training with all of these distortions makes the network more robust \\u2026 etc (for example, see [1]). \\n\\nThe attacks on character-based models are closer to adversarial examples from this perspective. However, the performance of the Gumbel Attack is significantly worse on character-based models than an attack as simple as the Delete-1 attack. The Greedy attack is more successful than the Delete-1 attack, however it is a straight-forward application of greedy optimization on discrete data and is not very novel or interesting. \\n\\n[1] Generalisation in humans and deep neural networks, arXiv:1808.08750\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Efficiency of Gumbel Attack; Difference and Connections in Discrete Optimization and Adversarial Attack.\", \"comment\": \"We thank the area chair for reading the reviews and our rebuttals carefully. We will answer the questions of Area Chair from the authors\\u2019 perspective.\", \"the_area_chair_proposes_two_questions\": \"1) Why do we need Gumbel as a new discrete optimization algorithm?\\n2) Have we improved \\u201cthe state-of-the-art\\u201d in discrete optimization?\\n\\nThe short reply is\\n1) Gumbel attack is efficient. The efficiency can be a practical concern in the setting of adversarial attack.\\n2) We propose better algorithms in terms of accuracy or efficiency in the regime of adversarial attack. This regime is not exactly the same as discrete optimization. \\n\\nWe address the details below. \\n1) \\n1.1 Gumbel attack is efficient both in terms of the number of model evaluations and in terms of real time. First, no model evaluation is required during the attack stage. Also, Figure 4 in the manuscript provides a comparison of real-time efficiency, which shows Gumbel attack is orders-of-magnitude faster. 
(Gumbel attack takes around 10^-2 seconds per sample, while FGSM, Delete-1 Score and other methods take between 10^-1s and 1s per sample on Yahoo! Answers.)\\n1.2 In practice, attackers may not be able to conduct many model evaluations to attack a real system.\\n1.3 It may also help design more efficient adversarial training algorithms. \\n\\n2)\\n2.1 We first address the difference between Greedy attack and standard greedy methods. \\nThe most standard greedy methods choose the first perturbation by evaluating the model d * V times, where d is the length of the sentence/paragraph and V is the size of the dictionary, and choose the next perturbation with complexity (d-1) * V, etc. Greedy attack follows a two-stage procedure motivated by a probabilistic framework, and takes O(d + k*V) evaluations in total (k being the number of perturbations); see the illustrative sketch at the end of this record. Moreover, Greedy attack is easier to parallelize. Given the efficiency concerns of adversarial attacks, it can be more practical. \\n2.2 The area of adversarial attack is not exactly the same as discrete optimization. \\nWe formulate the problem of adversarial attack as a constrained discrete optimization problem. The true constraint here is that \\u201chumans will not change their decisions\\u201d, which we approximate by constraining the number of perturbed words. Experiments involving human subjects have been carried out to validate the effectiveness of this approximation. \\n2.3 We only show the superior performance of our algorithms over algorithms in adversarial attack ([1-4]); we do not intend to claim that they achieve the state of the art in discrete optimization. \\n\\n[1] Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. Black-box generation of adversarial text sequences to evade deep learning classifiers. arXiv preprint arXiv:1801.04354, 2018.\\n[2] Jiwei Li, Will Monroe, and Dan Jurafsky. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220, 2016.\\n[3] Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. Crafting adversarial input sequences for recurrent neural networks. In Military Communications Conference (MILCOM 2016), pp. 49\\u201354. IEEE, 2016.\\n[4] Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. Deep text classification can be fooled. arXiv preprint arXiv:1704.08006, 2017.\"}", "{\"comment\": \"There's actually a paper under submission that makes exactly this argument ( https://openreview.net/forum?id=S1xoy3CcYX ). I definitely agree that, in light of this phenomenon, it should not be surprising that models have such low accuracy when you adversarially select noise to maximize the classification error rate.\\n\\nI don't think this is actually contradictory to adversarial examples as a line of research. In particular, one of the main reasons I see adversarial example work as interesting is that it gives us an estimate of the worst-case accuracy. Just like average-case accuracy is useful in many situations (and standard 'accuracy-on-test-set' measures give us this), worst-case accuracy is also useful in other cases.\\n\\nTo relate it back to this paper's topic of discrete data (again, I haven't read the paper): a classifier for malware that worked 100% of the time on \\\"normal\\\" data would be useless if it worked 0% of the time on adversarial data---because the only data it will ever see, malware, is by definition adversarial. 
The same argument applies to spam, and to a lesser extent various other written text attempting to avoid detection (e.g., the recent hate speech detectors).\\n\\nJust to clarify, though: it sounds like your \\\"Why is the task important?\\\" question is generally directed at the adversarial example research as a whole. Is this right? There are, by my count, at least 60 papers under submission to ICLR this year that focus explicitly on the problem of adversarial examples (explaining their existence, approaches for generating them, and approaches for defending against them). Would you have a similar complaint about any of these other papers?\", \"title\": \"I agree with your perspective\"}", "{\"title\": \"Thanks for elaborating\", \"comment\": \"Thanks for elaborating on the motivation you have for the work. It is very helpful.\"}", "{\"title\": \"Can the reviewers please clarify the contribution(s)?\", \"comment\": \"As defined in this paper, an adversarial attack is just solving an optimization problem. For discrete sequence inputs, the paper considers a constrained discrete optimization problem. Discrete optimization is well studied and greedy algorithms for discrete optimization are also well-known and well-studied methods. They are obvious to machine learning practitioners as well. The particular greedy algorithm the authors use seems to be effective for this problem and does not require any special tricks.\", \"could_the_reviewers_especially_please_comment_on_the_following_questions\": \"1. Is the Gumbel algorithm proposed necessary here or, more generally, is a new discrete optimization algorithm needed here?\\n\\n2. The discussion section of the paper says:\\n\\\"We have proposed a probabilistic framework for generating adversarial examples on discrete data, based on which we have derived two algorithms. Greedy Attack improves the state-of-the-art across several widely-used language models, and Gumbel Attack provides a scalable method for real-time generation of adversarial examples.\\\"\\n\\nThe paper claims to improve the state of the art. Can any of the reviewers comment on whether the paper advanced the state of the art in discrete optimization? Or, more generally, how should we read the claim above? Since a standard greedy algorithm works, there can't be anything special about this particular optimization problem that standard methods can't handle.\"}", "{\"title\": \"thanks for your comment\", \"comment\": \"Thanks for weighing in, Nicolas, but I'm not sure I understand your argument.\\n\\nNeither of your first two points should surprise us when the models have substantial test error.\\n\\nTo put it another way, 50% of random sigma=.2 perturbations are misclassified for ImageNet and adversarially chosen errors can be 20x closer than these randomly found errors. Of *course* the nearest error is going to be significantly closer than randomly found errors, the nearest error is, by construction, the nearest error! Why is 20x closer unusually close in a high (~150,000) dimensional space?\"}", "{\"title\": \"UPDATED: Elaborate Section 3.1 and add new human evaluation based on the reviewers\\u2019 suggestions.\", \"comment\": \"We have elaborated the Greedy attack with a clearer presentation in Section 3.1. First, we have adopted the common notations and marked each expectation with a subscript to indicate the source of the expectation. Second, in the updated version, we have added a detailed explanation of the approximation in Equation (5, 7). 
To summarize, when one assumes other features are perturbed adversarially, the Greedy Attack can be interpreted as maximizing a lower bound of the original objectives.\\n\\nWe have added another experiment to compare various algorithms with human evaluation on the IMDB movie review data set. On each instance, we increase the number of words to be perturbed until the prediction of the model changes. Then we ask humans to label the original and perturbed texts. Greedy attack yields the best performance in the experiment. Please see Appendix B of the updated version for details.\\n\\nWe again express our sincere thanks to all the reviewers, who have provided very useful suggestions for helping build our manuscript into better shape.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the detailed and encouraging comments!\\n\\nTo address the reviewer\\u2019s concern on Equation 5, we have added a more rigorous and detailed explanation of the approximation. Roughly, when one assumes other features are perturbed adversarially, the Greedy Attack can be interpreted as maximizing a lower bound of the original objectives. Details can be found in Section 3.1 of the updated version.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the encouraging comments and the help in addressing the importance of the task!\\n\\nWhat\\u2019s the \\u201crandom attack\\u201d baseline in these tasks? In computer vision it\\u2019s often sufficient to add a little bit of salt-and-pepper noise or Gaussian noise to change the model decision.\\n\\nWe define \\u201crandom attack\\u201d as randomly sampling k positions in the sentence and replacing them with randomly sampled words. We ran random perturbation on the test set of the IMDB movie review dataset used in our paper. The average consistency between the predictions of the model on the perturbed and the original instances is 99.9% after k = 10 words are changed, and 92% and 90.4% after k = 50 and 100 words are changed, respectively. See the following link for a plot comparing with our algorithms (on the first five words): https://drive.google.com/file/d/1T6UJQPz4iDFqsK9XQZ0nYv-bBcYxWraP/view?usp=sharing. \\nWe conclude that random perturbation does not work. \\n\\n\\u201cWhat the human evaluation scores would be on adversarials from other adversarial attacks?\\u201d \\n\\nWe have added another experiment to compare various algorithms with human evaluation on the IMDB movie review data set. On each instance, we increase the number of words to be perturbed until the prediction of the model changes. Then we ask humans to label the original and perturbed texts. Greedy attack yields the best performance in the experiment. Please see Appendix B of the updated version for details.\\n\\n\\u201cAre you planning to release the code? Will it be part of CleverHans or Foolbox?\\u201d\\n\\nYes, we plan to release the code. We will either release the code in a stand-alone GitHub repository or merge it into CleverHans.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the comments and for the explanations on the motivation of the task. We have improved the clarity of Section 3.1 based on the reviewer\\u2019s suggestions. Below we respond in detail to the reviewer\\u2019s comments:\\n\\n\\u201cthe Gumbel method performs poorly compared to other baselines\\u201d\\n\\nWe agree that the performance of the Gumbel method is comparable to previous methods. 
However, its running time is significantly shorter than that of all the previous methods and our Greedy Attack method (see Figure 4). Thus, Gumbel Attack is the most efficient across all methods even after taking into account the training stage. The efficiency of generating adversarial examples is an important factor for large-scale data. \\n\\n\\u201cwhat is causing their greedy approach to perform better\\u201d than \\u201csome gradient-based adversarial attacks\\u201d?\\n\\nWhile gradient-based methods have led to several successful algorithms in the continuous domain (e.g., natural images), they have been observed to be less effective compared to discrete methods (e.g., [1]). This is mainly because gradient-based methods focus on the sensitivity of the response to each feature in an infinitesimal neighborhood, while the perturbation is carried out in a discrete space. \\n\\n\\u201cit is egregiously difficult to read in parts and is poorly written\\u201d\\n\\nWe apologize for the difficulty of reading and have addressed the problem carefully. First, we have adopted common notations and marked each expectation with a subscript to indicate the source of the expectation. Second, in the updated version, we have added a clearer and more detailed explanation of the approximation in Equations (5, 7). To summarize, when one assumes other features are perturbed adversarially, the Greedy Attack can be interpreted as maximizing a lower bound of the original objectives.\\n\\n\\u201cThe argument about approximating the objective by considering the i positions independently is not convincing\\u201d\\n\\nWe agree with the reviewer that this is an unnecessary assumption and have removed it from our framework (but still keep it in the design of Gumbel Attack). The independence assumption is used in Gumbel Attack for the sake of efficiency. This can be interpreted as a constraint on the search space so that decisions can be made in parallel. It can be a promising future direction to consider a framework where features are perturbed sequentially, with a termination gate [2] to control when to stop the perturbation. The latter enables the use of variable sizes of perturbation, instead of top-k perturbation.\\n\\n[1] Gao, Ji, et al. \\\"Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers.\\\" arXiv preprint arXiv:1801.04354 (2018).\\n[2] Shen, Yelong, et al. \\\"ReasoNet: Learning to stop reading in machine comprehension.\\\" Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017.\"}", "{\"title\": \"Poor mathematical exposition\", \"comment\": \"Organizationally, the paper is fine and sections are presented in a logical manner. But as I mentioned in my review, the mathematical exposition is certainly non-conventional and maybe even wrong. Their notations (expectation symbols, conditional symbols, etc.) have serious issues, and I found their arguments about approximation and assumptions hard to follow. I'm not convinced by their argument for approximating their proposed schemes by considering each position independently (partly because I can't clearly follow their argument); moreover, I believe that their original non-approximated probabilistic (unclear) formulation is unnecessary because it doesn't add anything to the paper.\"}", "{\"title\": \"Adversarial examples help find systems' blind spots\", \"comment\": \"The task is important because it focuses on small perturbations to text that change the classifier decision. 
These changes, if undetected by humans, exhibit clear brittleness of a classifier and will aid in the design of a more robust classifier. This is the motivation for generating adversarial attacks in general: perturb the input ever so slightly, in a manner that is in general undetectable to the human eye but results in a drastic change in the model's predictions. This has implications ranging from interpretability and reliability of the model to security and privacy issues.\\nIn the MTurk experiments, the authors do demonstrate that a lot of the perturbations produced by their models do not change the human's decision on sentiment classification but do change the model prediction.\"}", "{\"comment\": \"[Disclaimer: I have not read the paper. This comment is solely intended to respond to the AC asking why the problem domain is important.]\\n\\nAdversarial example research papers have always had to deal with this question: why is this interesting? We already know classifiers make mistakes!\", \"there_are_at_least_a_few_common_counter_arguments\": [\"Yes, models make mistakes, but they are on average quite good. The interesting property of adversarial examples is that you can take an arbitrary input that is very clearly Class A and make the model produce the label for Class B. You can do this even when the object in Class A is the most A-like in the entire dataset, and even when Class B resembles nothing like Class A. That's what makes the domain interesting.\", \"Why bother trying to find strong attacks if random noise might work? The main counter-argument here is that random noise often has to have a significantly larger distortion than adversarial noise. With Gaussian noise with sigma=0.2 on ImageNet, models still reach modest (50%+) accuracy. Adversarial noise with a norm 20x smaller can reduce model accuracy to <1%.\", \"Is this actually a security problem? It depends on the situation. For a nice treatment of this question see https://arxiv.org/abs/1807.06732\"], \"title\": \"Motivating adversarial example research\"}", "{\"title\": \"I second the importance\", \"comment\": \"I'd like to second the importance of this work. Of course random perturbations at some point will also do the trick - but the same is true in computer vision applications, where small amounts of Gaussian noise often lead to misclassifications. Nonetheless, many people in CV study adversarial perturbations as a means to understand what concepts network models have learnt and how susceptible they really are. Minimum adversarial perturbations are often several orders of magnitude smaller than random noise in CV, and the same seems to be true on discrete data like text.\"}", "{\"title\": \"Motivation\", \"comment\": \"Dear Area Chair and Anonymous Reader:\\n\\nThanks for your questions on the motivation of adversarial attacks for discrete data. Below we briefly explain the motivation, followed by evidence that simple random perturbation does not work. \\n\\nIn summary, the area chair and another reader posed the following questions:\\n\\n1. Why does one need to study the phenomenon of adversarial examples on discrete data?\\n2. Why is this paper worth reading?\\n3. Do simple methods like random perturbation work on text data? \\n\\nIn short, our reply is:\\n\\n1. Robustness is an important criterion for models on discrete data. The generation of adversarial examples can be used to evaluate robustness or even improve robustness.\\n2. 
In this paper, our goal is to propose methods with better performance (Greedy attack) or with higher efficiency (Gumbel attack).\\n3. We provide evidence that simple methods like random perturbation do not work.\", \"below_are_concrete_details\": \"Robustness is an important criterion for the application of machine learning models in critical areas such as medicine, financial markets, recommendation systems, and criminal justice. Adversarial examples have been used to evaluate the (adversarial) robustness of models (e.g., [1, 2, 5]) and have also been applied to train robust models (e.g., [3, 4]).\\n\\nThe phenomenon of adversarial examples was first found in state-of-the-art deep neural network models for classifying images (e.g., [5, 6, 2]), where small perturbations unobservable by humans can easily fool neural networks. Similar to image data, the problem of adversarial perturbation on discrete data can be defined as altering the prediction of a model via a minimal perturbation to an original sample (e.g., [7-14]). \\n\\nWhile there have been many pioneering and interesting papers in this area (e.g., [7-14]), we proposed Greedy attack, a method to increase the misclassification rate of a model with a comparable scale of perturbation, and Gumbel attack, a method to improve the efficiency of generating adversarial examples (it just happens to be fashionable :) ).\\n\\nIt is natural to ask how the simplest algorithm, random perturbation, works before one is persuaded to read our paper. We compare our methods with random perturbation on the test set of the IMDB movie review dataset used in our paper. For each instance, we randomly sample k positions in the sentence and replace them with randomly sampled words. The average consistency between the predictions of the model on the perturbed and the original instances is 99.9% after k = 10 words are changed, and 92% and 90.4% after k = 50 and 100 words are changed, respectively. See the following link for a plot of the comparison: https://drive.google.com/file/d/1T6UJQPz4iDFqsK9XQZ0nYv-bBcYxWraP/view?usp=sharing. \\nWe conclude that random perturbation does not work. \\n\\n[1] Carlini, Nicholas, and David Wagner. \\\"Towards evaluating the robustness of neural networks.\\\" 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.\\n[2] Agarwal, Chirag, et al. \\\"An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks.\\\" arXiv preprint arXiv:1806.01477 (2018).\\n[3] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. ICLR (2018).\\n[4] Alex Kurakin, Ian Goodfellow, Samy Bengio. Adversarial machine learning at scale. ICLR 2017. \\n[5] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015.\\n[6] Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, and Pascal Frossard. \\\"DeepFool: a simple and accurate method to fool deep neural networks.\\\" CVPR, 2016.\\n[7] Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. Black-box generation of adversarial text sequences to evade deep learning classifiers. IEEE Security and Privacy Workshops (SPW), 2018.\\n[8] Robin Jia and Percy Liang. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2021\\u20132031, 2017.\\n[9] Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. Deep text classification can be fooled. IJCAI, 2018. 
\\n[10] Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. Crafting adversarial input sequences for recurrent neural networks. In Military Communications Conference (MILCOM 2016), IEEE, 2016.\\n[11] Suranjana Samanta and Sameep Mehta. Towards crafting text adversarial samples. arXiv preprint arXiv:1707.02812, 2017.\\n[12] Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. Seq2Sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. arXiv preprint arXiv:1803.01128, 2018.\\n[13] Javid Ebrahimi, Anyi Rao, Daniel Lowd, Dejing Dou. HotFlip: White-box adversarial examples for text classification. ACL, 2018. \\n[14] Jiwei Li, Will Monroe, Dan Jurafsky. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220, 2016.\"}", "{\"comment\": \"Random perturbations are enough to fool text classifiers, so why are the authors doing this? Because Gumbel-Softmax is fashionable?\", \"title\": \"Indeed, why is this problem important?\"}", "{\"title\": \"Why is the task important?\", \"comment\": \"Can you please clarify why you say the task is important? It is very easy to generate errors for models of text. Attackers would not need the methods in this paper to produce Yelp reviews that a state-of-the-art text sentiment classifier got wrong. They would not need any knowledge of machine learning at all to find errors for these text classifiers.\"}", "{\"title\": \"writing quality is extremely important\", \"comment\": \"A poorly written manuscript is sufficient reason, by itself, to recommend rejecting a paper.\\n\\nCan you clarify how detrimental these writing problems are? Are they problems at the section and organizational level? The paragraph level, in constructing clear prose? The sentence level? All of the above? Is the logical structure of the argument well-organized and easy to follow?\"}", "{\"title\": \"Important task; very poorly written\", \"review\": \"This paper addresses the problem of generating adversarial examples for discrete domains like text. The authors propose two simple techniques:\\n1) Greedy: a two-stage process in which the first stage finds the k words in the sentence/paragraph to perturb and the second stage changes the words at the positions identified in stage 1.\\n2) Gumbel: the first approach amortized over datasets, where the first and second stages are parametrized and learned over the dataset, with the loss being the probability of flipping the decision.\\nSpecifically, for the Gumbel approach, the authors use the non-differentiable top-k-argmax output to train the module in the second stage, which is not ideal; it would be better to train both stages jointly in an end-to-end differentiable manner.\\n\\nThe results show that the Greedy approach is able to significantly affect the accuracy of the systems compared to other adversarial baselines. MTurk evaluation shows that for tasks like sentiment analysis, humans weren't as confused as the systems were when the selected words were changed, which is encouraging. However, the Gumbel method performs poorly compared to other baselines.\\nMoreover, a thorough analysis of why Greedy does better than some gradient-based adversarial attacks is needed in the paper, because it is unclear what is causing their greedy approach to perform well; is it the two-stage nature of the process?\\n\\nMy major gripe with the paper is that it is egregiously difficult to read in parts and is poorly written. 
There are dangling conditional bars in many equations (5, 7, Greedy attack, etc.), unclear \\\"expectation (E)\\\" signs, and many other confusing notational choices, which make the math difficult to parse. I am not even sure if those equations are correctly conveying the idea they are meant to convey. I found the algorithms to be more clearly written, and I realize that the text in the models and equations is unnecessarily complicated. The argument about approximating the objective by considering the i positions independently is not convincing, and there is nothing in the paper to show whether the assumption is reasonable.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Exciting advance in discrete adversarial attacks\", \"review\": \"In this work the authors introduce two new state-of-the-art adversarial attacks on discrete data based on a two-stage probabilistic process: the first step identifies key features, which are then replaced in the second step through choices from a dictionary.\\n\\nOverall the manuscript is very well written and easy to follow. The evaluation is extensive and contains all previous attacks I am aware of. The greedy attack outperforms all prior work by a large margin, while the Gumbel attack works on par with the previous state-of-the-art while being significantly faster.\", \"i_only_have_a_few_questions_and_remarks\": [\"What\\u2019s the \\u201crandom attack\\u201d baseline in these tasks? In computer vision it\\u2019s often sufficient to add a little bit of salt-and-pepper noise or Gaussian noise to change the model decision.\", \"Another thing I am wondering is what the human evaluation scores would be on adversarials from other adversarial attacks. Adversarial attacks in general (e.g. in computer vision) can work in two ways: one actually changes the semantic content (thus also \\u201cfooling\\u201d humans), while the other changes background features / adds noise to which humans are pretty insensitive (unless you add too much of it). The greedy attack does seem to change some semantics, as can be seen in the increased error rate of humans (which is pretty rare for computer vision adversarials). It might be that other attacks rather change words or characters which are not as semantically meaningful, as would be revealed by the accompanying human scores.\", \"Are you planning to release the code? Will it be part of CleverHans or Foolbox?\", \"Overall, I find this work to be a really exciting advance on discrete adversarial attacks.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Novel probabilistic framework for making adversarial attacks on deep networks with discrete-valued inputs; flexible framework that allows solving the trade-off between attack success rate and computation time\", \"review\": \"The authors proposed a novel probabilistic framework to model adversarial attacks on deep networks with discrete inputs such as text. The proposed framework assumes a two-step construction of an adversarial perturbation: 1) finding relevant features (or dimensions) to perturb (Eq. 3); 2) finding values to replace the features that are selected in step 1 (Eq. 4). 
The authors approximate some terms in these two equations to make the optimization easier. For example, it is *implicitly* assumed that, given that the i-th feature is removed from consideration, the probability of attack success does not change *on average* under a probabilistic *adversarial* attack on the other features (Eq. 5). It is not clear why that should hold and under what conditions that assumption would be reasonable (given that the attacks on other features are adversarial, although probabilistic).\\nThe proposed framework allows one to solve the computation vs. success rate trade-off by either estimating the best attack from the network (called greedy attack, Eq. 6) or using a parametric estimation that does not require model evaluation (called Gumbel attack). Experimental results suggest that Gumbel attack has a better or competitive attack rate on models developed for text classification while being the most computationally efficient among the methods. It is also noticeable that the greedy attack achieves the best success rate by a large margin among all the tested methods.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
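To ground the complexity argument made in point 2.1 of the authors' reply above (O(d + k*V) model evaluations for a two-stage procedure, versus d*V per substitution for exhaustive greedy search), and the random-perturbation baseline discussed throughout this record, here is a rough, hypothetical Python sketch against a black-box `score` function. It is not the authors' exact Greedy Attack; all names, and the masking heuristic in stage 1, are illustrative assumptions.

```python
import random
from typing import Callable, List, Sequence

Score = Callable[[List[str]], float]  # model's probability of the original class

def two_stage_attack(score: Score, words: Sequence[str],
                     vocab: Sequence[str], k: int,
                     mask: str = "<unk>") -> List[str]:
    """Two-stage substitution: d model calls in stage 1, k * |vocab| in stage 2,
    versus d * |vocab| calls per substitution for exhaustive greedy search."""
    words = list(words)
    base = score(words)
    # Stage 1: leave-one-out importance of every position (d calls).
    drops = sorted(
        ((base - score(words[:i] + [mask] + words[i + 1:]), i)
         for i in range(len(words))),
        reverse=True)
    # Stage 2: best replacement at each of the k most important positions.
    for _, i in drops[:k]:
        words[i] = min(vocab, key=lambda w: score(words[:i] + [w] + words[i + 1:]))
    return words

def random_attack(words: Sequence[str], vocab: Sequence[str], k: int) -> List[str]:
    # Baseline discussed in the thread: replace k random positions with random
    # words; per the reported numbers, this rarely changes the model prediction.
    words = list(words)
    for i in random.sample(range(len(words)), k):
        words[i] = random.choice(vocab)
    return words
```

The point of the sketch is only the call-count bookkeeping: stage 1 makes d model calls, stage 2 makes k*|vocab| calls, and the random baseline makes none.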
S1x2Fj0qKQ
Whitening and Coloring Batch Transform for GANs
[ "Aliaksandr Siarohin", "Enver Sangineto", "Nicu Sebe" ]
Batch Normalization (BN) is a common technique used to speed up and stabilize training. On the other hand, the learnable parameters of BN are commonly used in conditional Generative Adversarial Networks (cGANs) for representing class-specific information using conditional Batch Normalization (cBN). In this paper we propose to generalize both BN and cBN using a Whitening and Coloring based batch normalization. We show that our conditional Coloring can represent categorical conditioning information, which largely helps the cGAN qualitative results. Moreover, we show that full-feature whitening is important in a general GAN scenario in which the training process is known to be highly unstable. We test our approach on different datasets and using different GAN networks and training protocols, showing a consistent improvement in all the tested frameworks. Our CIFAR-10 conditioned results are higher than all previous works on this dataset.
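As a rough illustration of the transform this abstract describes - full-feature whitening of a batch followed by a learned coloring whose parameters can be class-conditional - here is a minimal NumPy sketch. It is only a sketch of the idea: it uses plain ZCA whitening on an (n, d) feature matrix, treats Gamma_y and beta_y as given arrays rather than trainable parameters, and omits the exact decomposition and back-propagation details of the actual method.

```python
import numpy as np

def whiten(X, eps=1e-5):
    # Full-feature (ZCA) whitening of a batch X with shape (n, d); for conv
    # features, spatial positions would be folded into the batch dimension.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / X.shape[0]
    vals, vecs = np.linalg.eigh(cov)            # cov is symmetric PSD
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W                               # ~identity covariance

def conditional_coloring(X_hat, Gamma_y, beta_y):
    # Learned, class-conditional coloring: a full d x d matrix and a shift per
    # class y, generalizing the per-channel scale/shift (gamma, beta) of cBN.
    return X_hat @ Gamma_y.T + beta_y
```

Under this sketch, np.cov(whiten(X), rowvar=False) is close to the identity for a large enough batch, which is the "full-feature whitening" property the abstract refers to.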
[ "Generative Adversarial Networks", "conditional GANs", "Batch Normalization" ]
https://openreview.net/pdf?id=S1x2Fj0qKQ
https://openreview.net/forum?id=S1x2Fj0qKQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1eOaY0JlN", "BJlOMdwL1V", "S1e0RerUJN", "HJe2HPNEyN", "HkxNyV7kJN", "H1lZnaW1yN", "HkxgtGiTC7", "BylerOc6Rm", "Byg2gu5TCX", "BJg7KNitCQ", "BJl274iK07", "rylaMQjKRQ", "r1locJstAX", "B1xUUyjKAX", "SJe3354y6m", "Bkxhog2O2Q", "SygwEwj7hQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544706496035, 1544087567785, 1544077525824, 1543944003913, 1543611355985, 1543605672944, 1543512696486, 1543510072369, 1543510004010, 1543251066712, 1543250979766, 1543250708894, 1543249811007, 1543249742016, 1541520052115, 1541091491673, 1540761391281 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper480/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/Authors" ], [ "ICLR.cc/2019/Conference/Paper480/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper480/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper480/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper addresses normalisation and conditioning of GANs. The authors propose to replace class-conditional batch norm with whitening and class-conditional coloring. Evaluation demonstrates that the method performs very well, and the ablation studies confirm the design choices. After extensive discussion, all reviewers agreed that this is a solid contribution, and the paper should be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"solid idea and results\"}", "{\"title\": \"Answer\", \"comment\": \"Q:It is not necessarily to do the real complicated network, but one-layer linear is too simple to get a convincing results.\\nI would be convinced if the authors could show either for one-layer nonlinear network, or multi-layer linear network.\", \"a\": \"A multi-layer linear network is actually equivalent in representation capacity to a one-layer network. For instance, consider a network with 2 concat layers, which (using our previous terminology) is given by:\\n\\n(1) W_2 * (W_1 * z + V_1 * one_hot(y) + b_1) + V_2 * one_hot(y) + b_2.\\n\\nEq. (1) is equivalent to:\\n\\n(W_2 * W_1) * z + (W_2 * V_1 + V_2) * one_hot(y) + (W_2 * b_1 + b_2).\\n\\nBy renaming W = W_2 * W_1; V = W_2 * V_1 + V_2 and b = W_2 * b_1 + b_2, then we obtain the same expression as before: W * z + V * one_hot(y) + b.\\n\\n---\\n\\nOne-layer nonlinear network. 
Consider for example concat and cBN layers and a distribution of one particular neuron with a ReLU nonlinearity.\\nAfter ReLU, the distribution becomes a Rectified Gaussian (https://en.wikipedia.org/wiki/Rectified_Gaussian_distribution), i.e.,\\nN^R(m_y, s^2) for concat and N^R(m_y, s_y^2) for cBN.\\nThe former is characterized by one class-independent and one class-specific parameter, while the latter has 2 class-specific parameters, thus the representation capacity of the latter is higher.\"}", "{\"title\": \"one-layer linear is too simple\", \"comment\": \"It is not necessarily to do the real complicated network, but one-layer linear is too simple to get a convincing results.\\n\\nI would be convinced if the authors could show either for one-layer nonlinear network, or multi-layer linear network.\"}", "{\"title\": \"Answer\", \"comment\": \"Q: I agree with the authors on one-layer linear network. However, I think multi-layer nonlinear network can not be simplified as linear operator. The generator is supposed to approximate 'any' distribution.\\nCould the authors kindly point out a reference where the generator is treated as linear operator, or the output distribution is considered Gaussian? \\nEven for one-layer nonlinear NN, the expressive power is good. Let's say we have a Gaussian variable x~N(0, 1). What is the variance for variable y = [x-\\\\mu]_{+}, which is basically shift the mean and then use ReLU?\", \"a\": \"Miyato et al. (2018) explain in the beginning of Sec. 5.1 of their paper that they use the cBN proposed in (Dumoulin et al. (2016b)).\\n\\nIn (Gulrajani et al. (2017)) cBN is not explicitly referred to in the paper but it is used in the publicly available code (https://github.com/igul222/improved_wgan_training/blob/master/gan_cifar_resnet.py#L79) for the supervised experiments (corresponding to Tab. 3-right of their paper). \\n\\nNone of the aforementioned papers directly perform ablation studies on this aspect (being not an original contribution of their method). However, for instance, Gulrajani et al. (2017) compare their method with other GANs using class-label concatenation in Tab. 3-right of their paper (which largely corresponds to Tab. 2-right of our paper). The low-ranked methods in that table (e.g., SteinGAN (Wang & Liu (2016)), DCGAN with labels (Wang & Liu (2016)), AC-GAN (Odena et al. (2016))) correspond to concatenation-based approaches.\", \"q\": \"Could the authors kindly point out to me where do (Gulrajani et al. (2017)) and (Miyato et al. (2018)) talk about G_cBN(z,y) and G_concat(z,y)?\"}", "{\"title\": \"Not convinced\", \"comment\": \"I agree with the authors on one-layer linear network. However, I think multi-layer nonlinear network can not be simplified as linear operator. The generator is supposed to approximate 'any' distribution.\\n\\nCould the authors kindly point out a reference where the generator is treated as linear operator, or the output distribution is considered Gaussian? \\n\\nEven for one-layer nonlinear NN, the expressive power is good. Let's say we have a Gaussian variable x~N(0, 1). What is the variance for variable y = [x-\\\\mu]_{+}, which is basically shift the mean and then use ReLU?\\n\\nCould the authors kindly point out to me where do (Gulrajani et al. (2017)) and (Miyato et al. (2018)) talk about G_cBN(z,y) and G_concat(z,y)?\"}", "{\"title\": \"More details on the distribution argument for cWC\", \"comment\": \"Q: I don't quite get your explanation about distribution of G_concat(z,y), G_cBN(z,y) , G_cWC(z,y). 
Why are the output distributions of the generator Gaussian? Even though the covariance of the activations is not explicitly changed when we use concatenation as the condition, why is the output covariance independent of the labels?\", \"a\": \"Note that our conclusions about the Gaussian distributions of G, etc. are drawn using some simplifying assumptions, such as assuming a simplified one-linear-layer G. We provide a formal demonstration below, and then we extend the conclusions to a deep, non-linear G.\", \"assumptions\": \"G is a one-linear-layer network, specifically: G(z) = W * z + b; and z ~ N(0, I). W is a weight matrix, z and b are vectors.\\n\\n(1) G_concat(z,y) = U * [z || one_hot(y)] + b; where U = [W -- V] and:\\none_hot(y) is the one-hot representation of the integer variable y, \\n|| denotes vertical concatenation, \\n-- denotes horizontal concatenation, \\nV is an additional parameter matrix corresponding to the one_hot(y) concatenation.\", \"hence\": \"G_concat(z,y) = W * z + V * one_hot(y) + b.\\n\\nSince we consider only linear operations and we assume that z ~ N(0, I), the output distribution will also be Gaussian. Note that the covariance matrix of the generated distribution depends only on the first term (W * z), since the remaining terms are constant with respect to z. Specifically, the covariance matrix depends on W, which is class-independent.\\n\\n(2) G_cBN(z,y) = W * (gamma_y * z + beta_y) + b, where:\\ngamma_y is a diagonal matrix whose (k,k)-th element corresponds to the gamma_y,k scaling parameter in Eq. (6),\\nbeta_y is the vector of the shifting parameters.\\n\\nNote that standardization is omitted because here we are modeling only the representation capacity of the scaling-shifting parameters of cBN. \\nThe covariance matrix in this case depends on W * gamma_y. This leads to class-specific variance values but with a class-independent correlation matrix.\\n\\n(3) G_cWC(z,y) = W * (Gamma_y * z + beta_y) + b, where we omit the class-agnostic part of Eq. (7) for simplicity and:\\nGamma_y is the coloring matrix defined in Sec. 4.\\n\\nAlso in this case the covariance matrix depends on W * Gamma_y, but now Gamma_y is a full matrix, which leads to a class-specific covariance matrix.\\n------\\n\\nSimilar arguments also hold for non-normal distributions. Moreover, it can be shown that, given a one-linear layer in a deep G, there is a hierarchy in representation capacity among G_concat, G_cBN and G_cWC. Hence, comparing two networks G_1 and G_2 having the same architecture (i.e., same number of layers, etc.), if all the layers in G_1 have a higher representation capacity than the corresponding layers in G_2, then, overall, G_1 has a representation capacity greater than or equal to that of G_2. \\n\\n(Gulrajani et al. (2017)) and (Miyato et al. (2018)) empirically showed that this inequality is strict, in the sense that G_cBN(z,y) is better than G_concat(z,y). In our work we empirically show that G_cWC(z,y) is better than G_cBN(z,y) (e.g., see Tab. 6).
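\\n\\nAs a quick numerical check of the three cases above, a minimal numpy sketch (the dimensions, sample size, random seed and tolerances are illustrative and not taken from the paper):
import numpy as np
rng = np.random.default_rng(0)
d, n = 4, 1000000                                # latent dimension, sample count
W = rng.standard_normal((d, d))                  # shared linear layer of G
z = rng.standard_normal((d, n))                  # z ~ N(0, I)
g1 = np.diag(rng.uniform(0.5, 2.0, d))           # cBN-style scaling, class 1
g2 = np.diag(rng.uniform(2.5, 4.0, d))           # cBN-style scaling, class 2 (clearly different)
G1 = rng.standard_normal((d, d))                 # cWC-style full coloring, class 1
# (1) concat: the label only shifts the mean, so Cov = W W^T for every class.
assert np.allclose(np.cov(W @ z), W @ W.T, atol=0.2)
# (2) cBN: Cov = W diag(gamma_y)^2 W^T -- class-dependent, but driven by only
# d scaling parameters per class.
assert not np.allclose(np.cov(W @ (g1 @ z)), np.cov(W @ (g2 @ z)), atol=0.2)
# (3) cWC: Cov = W Gamma_y Gamma_y^T W^T -- a full class-specific covariance.
assert np.allclose(np.cov(W @ (G1 @ z)), W @ G1 @ G1.T @ W.T, atol=0.2)
\"}", "{\"title\": \"thanks for clarification on whitening, don't understand the distribution argument for cWC\", \"comment\": \"Thanks for clarifying that the whitening is not conditioned. I will remove that part from the review later. Again, I think it is a tolerable trade-off, and it is not the main reason for the evaluation and score.\\n\\n\\nI don't quite get your explanation about the distributions of G_concat(z,y), G_cBN(z,y), G_cWC(z,y). Why are the output distributions of the generator Gaussian? 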
Even though the covariance of the activations is not explicitly changed when we use concatenation as the condition, why is the output covariance independent of the labels?\"}", "{\"title\": \"Reply to the after-rebuttal comments (second part)\", \"comment\": \"Q: The motivation of cWC is still unclear. I did not find the details of cBN for class-label conditions, and how they motivated it, in (Gulrajani et al. (2017)) and (Miyato et al. (2018)). Even if it has been used before, I would encourage the authors to restate the motivation in the paper. Saying it has been used before is an unsatisfactory answer for an unintuitive setting.\", \"a\": \"The Whitening() procedure used in the conditional case (cWC) is exactly the same procedure used in the unconditional setting (WC) and it is not conditioned on y, meaning that the covariance matrix is computed using all the batch samples, independently of the class. This is done for the reason you mention: we don't want covariance matrices computed with only m/n samples, which may be unstable. We will clarify this in the paper. It is not completely clear to us what you mean by \"mismatch\". If you mean that feature separation is performed only by CondColoring(), that is true: Whitening() is identical in both WC and cWC. However, we believe that this is a good trade-off, as the (Gamma_y, Beta_y) parameters in CondColoring() are enough to perform feature separation (please see also our previous answer). Note that in cBN as well, the standardization part is unconditioned.\", \"q\": \"Another less important comment is that it is still hard to say how much benefit we get from the additional learnable parameters in WC compared to BN. It is probably not so important because it can be a good trade-off for state-of-the-art results. In Table 3 for unconditioned generation, it looks like a large part of the benefit comes from the larger parameter space. For conditioned generation in Table 6, I am not sure whether whitening is conditioned or not, which makes it less reliable to me. If whitening is conditioned, then the samples in each minibatch may not be enough to get a stable whitening. If whitening is unconditioned, then there seems to be a mismatch between whitening and coloring.\"}", "{\"title\": \"Reply to the after-rebuttal comments (first part)\", \"comment\": \"Thank you for appreciating our response and our WC proposal. Our answers to your feedback are below.\", \"q\": \"The motivation of WC for GANs is still unclear. WC is a general extension of BN, and a simplified version has been shown to be effective for discrimination in Huang 2018. I understand the empirically good performance for GANs. But I am not convinced of why WC is particularly effective for GANs compared to discrimination. The smoothness explanation of BN applies to both GANs and discrimination. I actually think it may be nontrivial to extend the smoothness argument from BN to WC.\", \"a\": \"We agree that the need for loss smoothness applies to both GANs and discriminative networks. 
However, the reason why stability is more important in an adversarial setting is related to the fact that GAN optimization aims at finding a Nash equilibrium between two players, a problem which is more difficult and less stable than common discriminative-network optimization and which frequently leads to non-convergence.\\nWe plan to provide empirical evidence that the GAN loss is smoother using our WC than when using BN in a journal extension of this paper, following the protocol suggested by Odena et al. (2018).\"}", "{\"title\": \"Response to Reviewer #2 (Part 2)\", \"comment\": \"Q: In Table 3, std-C is better than WC-diag, which indicates coloring is more important. In Table 6, cWC-diag is better than c-std-C, which indicates whitening is more important. Why?\", \"a\": \"Following your suggestion, we are training our WC + SN + ResNet + Proj. Discr. on ImageNet. The complete training is not yet finished because the dataset is very large and the basic network (based on the SN + ResNet + Proj. Discr. approach) used in (Miyato et al. (2018)) for a similar experiment is very big (i.e., too big to fit on a single GPU). We will have final results at camera-ready time. However, we have already almost reached the performance of (Miyato et al. (2018)) with far fewer iterations and a much smaller-capacity network. Specifically:\\n\\n- At 100k iterations, ours (WC + SN + ResNet + Proj. Discr.) has an IS value of 21.4, while (Miyato et al. (2018)) is at 17.5.\\n- At 200k iterations, ours is 26.12, while (Miyato et al. (2018)) is at 21.\\n- At 300k iterations, ours is 29.19, while (Miyato et al. (2018)) is at 23.5.\\n\\n(Miyato et al. (2018)) report results at 450k iterations. Note that with only 2/3 of the iterations we are already almost on par with their results: \\n- At 300k iterations, ours is 29.19, while at 450k iterations (Miyato et al. (2018)) is at 29.5.\\n\\nMoreover, note that, in order to fit the basic network on a single GPU, we had to reduce its capacity, decreasing the number of 3x3 convolutional filters. All in all, and considering our additional coloring filters, our generator has 6M parameters while the generator of (Miyato et al. (2018)) has 45M parameters. The discriminator is the same for both: 39M.\\nThus these preliminary results show that with 2/3 of the iterations and a generator with only 13% of the parameters of the generator used in (Miyato et al. (2018)), we are almost on par on a big dataset like ImageNet. We will add this experiment to the final version of the paper; we believe it is important also because it emphasizes that the advantage of our method does not depend on the parameter increase due to the coloring layers.\", \"q\": \"Having ImageNet results will be a big support for the paper.\"}", "{\"title\": \"Response to Reviewer #2 (Part 1)\", \"comment\": \"Thank you for your detailed review. Our answers are below.\", \"q\": \"The authors claim progressive GAN used a larger generator to achieve better performance than WC. A WC layer is generally larger than a BN layer and has more learnable parameters. Could the authors compare the number of generator parameters in BN-ResNet, WC-ResNet, and progressive GAN?\", \"a\": \"You are right: both WC and cWC have more learnable weights, due to the coloring step. However, note that our performance boost does not depend only on the increase of the convolutional-filter number. For instance, if you compare c-std-C and c-std-C_{sa} with the corresponding whitening-based versions cWC and cWC_{sa}, which have exactly the same number of parameters, you see that the latter drastically outperform the former (Tab. 6).
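\\n\\nThe two pairs can share identical parameter counts presumably because whitening, like standardization, is computed from batch statistics and carries no learnable weights of its own, so all learnable weights sit in the coloring step. A rough per-layer count as a sanity check (the values of C and K are illustrative, not the paper's exact architecture):
def coloring_params(C, K):        # per-class full coloring matrix plus shift
    return K * (C * C + C)
def scale_shift_params(C, K):     # per-class BN-style gain plus shift
    return K * 2 * C
C, K = 256, 10
assert coloring_params(C, K) == 657920     # shared by c-std-C and cWC
assert scale_shift_params(C, K) == 5120    # all that cBN learns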
We emphasized this in the new paper at the end of Sec. 5.2.1. Please see also our answer below about the ImageNet experiment.\", \"the_number_of_overall_parameters_used_in_the_architectures_you_mentioned_are\": \"BN-ResNet (called SN + ResNet + Proj. Discr. in the paper). G: 4.3M; D: 1M. \\nWC-ResNet (called WC + SN + ResNet + Proj. Discr. in the paper). G: 4.7M; D: 1M.\\nProgressive GAN. G: 18.9M; D: 18.9M.\\n\\nwhere G indicates the Generator and D the Discriminator.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your review. Our answer is below.\", \"q\": \"My only concern is that the proposed WC algorithm seems to be applicable to many tasks, including discriminative scenarios. This paper seems to have the potential to be a more general paper about the WC method. Why consider only GANs? What is the performance of WC compared with BN/ZCA whitening in other tasks? It would be better if the authors could elaborate on the motivation for choosing GANs as the application.\", \"a\": \"Please note that in the ex-Appendix D (which is now Sec. 6) we actually compare WC with both BN and DBN (proposed in (Huang et al. (2018)) and based on the ZCA whitening) in a discriminative scenario using the protocol suggested in (Huang et al. (2018)). The results reported in the ex-Tab. 10 (now Tab. 7) show that both WC and DBN achieve lower errors than BN. Moreover, WC is only slightly worse than DBN (e.g., using a ResNet-32, WC has an error rate 0.0006 higher than DBN). However, note also that the maximum error-rate difference using the protocol suggested in (Huang et al. (2018)) over all the tested normalization techniques is lower than 0.01. This probably shows that in a discriminative scenario, replacing standardization (BN) with full-feature whitening (e.g., using our WC or DBN) is much less useful than in a GAN scenario, in which we obtained drastically different results when using WC/cWC with respect to BN/cBN.\\nFrom this empirical analysis we conclude that full-feature whitening shows its major application potential in a GAN setting. The reason behind this different behaviour (a marginal accuracy boost in a discriminative scenario vs. a large boost in a GAN setting) is probably the higher instability of GAN training with respect to discriminative networks. Indeed, as mentioned in Sec. 1, recent papers (Santurkar et al. (2018); Kohler et al. (2018)) show that the main reason for the success of BN lies in improved training stability. As a consequence, our extension of feature standardization to feature whitening goes in the direction of further improving this stability, which is much more important for GANs than for discriminative networks (please see also our first answer to Reviewer #3).\\nIn the new version of the paper we have moved Appendix D to Sec. 6 and we have added a discussion at the end of that section which summarizes the above analysis.\\nFinally, note that a second, important motivation of our work, for conditional GANs, is that our cWC can represent class-specific information using more informative filters (Sec. 
1), and this second aspect is naturally related to a GAN-based application.\"}", "{\"title\": \"Response to Reviewer #3 (part 2)\", \"comment\": \"Q: It is not clear why the proposed method is much faster than ZCA-based whitening.\", \"a\": \"This remark is not clear to us. Of course we cannot be aware of concurrent ICLR submissions and we think that experimental comparisons at submission time should be done with only already published papers. However, checking the arXiv version of the paper you mentioned, we noticed that the authors report exactly the same IS we got (IS = 9.06).\", \"q\": \"\\\"Our CIFAR-10 supervised results are higher than all previous works on this dataset.\\\" These are somewhat overstated. 8.66 unconditioned IS looks good; however, conditioned IS equal to or better than 9.06 have been reported, e.g. in a concurrent ICLR submission - \\\"Learning Neural Random Fields with Inclusive Auxiliary Generators\\\".\"}", "{\"title\": \"Response to Reviewer #3 (part 1)\", \"comment\": \"Thank you for your review. Below our answers.\", \"q\": \"It is not clear why WC performs better than W_zca C (Table 3), though the improvement is moderate. The difference is that WC uses Cholesky decomposition and ZCA uses eigenvalue decomposition. Compared to W_zca C, WC seems to be an incremental contribution.\", \"a\": \"Note that W_{zca}C is not the Decorrelated Batch Normalization (DBN) proposed in (Huang et al. (2018)) because, for instance, no coloring is used in DBN. Hence W_{zca}C is a variant of our WC in which the whitening phase is performed using ZCA. This is explained in the last lines of Sec. 5.1.1.\\n\\nConcerning the reason why WC performs better than W_{zca}C, we believe this is due to the higher stability of the Cholesky decomposition with respect to the Singular Value Decomposition (SVD) used in the ZCA-whitening. Specifically, in the following we refer to Appendix A.2 of (Huang et al. (2018)), where the backpropagation formulas used for the proposed ZCA-based whitening are presented (we used the same backpropagation in our W_{zca}C experiments). The gradient of the loss with respect to the covariance matrix depends on a matrix called K in Eq. (A.11) of that Appendix. The (i,j)-th element of K is (by definition) inversely proportional to the difference of the (i,j)-th singular values (1/ (sigma_i - sigma_j)) of the covariance matrix. As a consequence, if some of the singular values are identical or very close to each other, then computing K_i,j is ill-conditioned.\\nWhat we empirically observed is that W_{zca}C may be highly unstable and training may start to drastically deteriorate after some iterations. Indeed, the results reported in Tab. 3 refer to the *best* IS-FID values observed during training. After about 40k iterations, \\nW_{zca}C suddenly degenerated, collapsing to a model that always produces a constant, uniform grey image. To show this phenomenon, we repeated training of both W_{zca}C and WC and we added a new figure (Fig. 4) in the new paper (please, see the new Appendix D, in which we discuss about W_{zca}C instability issues). The new Fig. 4 shows different IS/training-iteration curves corresponding to both WC and W_{zca}C. As you can see, the W_{zca}C training behaviour may drastically degenerate at some point. 
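\\n\\nThe ill-conditioning described above is easy to reproduce numerically. A tiny sketch of the 1/(sigma_i - sigma_j) terms that enter K, with hand-picked, nearly equal singular values (all numbers are illustrative):
import numpy as np
sigma = np.array([2.0, 1.0, 1.0 + 1e-7, 0.5])    # two singular values almost coincide
diff = np.subtract.outer(sigma, sigma)           # entries sigma_i - sigma_j
off = ~np.eye(sigma.size, dtype=bool)            # K is defined off the diagonal
K = np.zeros_like(diff)
K[off] = 1.0 / diff[off]
assert np.abs(K).max() > 1e6                     # the near-tie makes K explode (~1e7)
Any gradient routed through such entries is essentially noise, which is consistent with the sudden collapses visible in the new Fig. 4, whereas the Cholesky-based backward pass involves no such pairwise differences of singular values.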
Conversely, our WC (and cWC) has never shown these drastic instability phenomena.\"}", "{\"title\": \"Interesting, but there are some unclear issues\", \"review\": \"This paper proposes to generalize both BN and cBN using Whitening and Coloring based batch normalization. Whitening is an enhanced version of mean subtraction and normalization by standard deviation. Coloring is an enhanced version of per-dimension scaling and shifting.\\nEvaluation experiments are conducted on different datasets and using different GAN networks and training protocols. Empirical results show improvements over BN and cBN.\\n\\nThe proposed method WC is interesting, but there are some unclear issues.\\n\\n1. There are two motivations for this paper: BN improves the conditioning of the Jacobian, and the stability of GAN training is related to the conditioning of the Jacobian. These motivate the paper to develop enhanced versions of BN/cBN, as said in the introduction. More discussion of why WC can further improve the conditioning over ordinary BN would be better.\\n\\n2. It is not clear why WC performs better than W_zca C (Table 3), though the improvement is moderate. The difference is that WC uses Cholesky decomposition and ZCA uses eigenvalue decomposition. Compared to W_zca C, WC seems to be an incremental contribution.\\n\\n3. It is not clear why the proposed method is much faster than ZCA-based whitening.\\n\\n=========== comments after reading response ===========\\n\\nThe authors make a good response, which clarifies the unclear issues from my first review. I remove the mention of the concurrent submission.\\n\\nIn particular, the new Appendix D with the new Fig. 4 clearly explains and shows the benefit of WC over W_zca.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Interesting idea, convincing results\", \"review\": \"This paper addresses the instability problem in GAN training by replacing batch normalization (BN) with a whitening and coloring transform (WC) to provide full-feature decorrelation. The paper considers both unconditional and conditional cases.\\nIn general, the idea of replacing BN with WC is interesting and well motivated. \\n\\nThe proposed method looks novel to me. Compared with the ZCA whitening in Huang et al. 2018, the Cholesky decomposition is much faster and performs better. The experiments show promising results and demonstrate that the proposed method is easy to integrate with other advanced techniques. The experimental results also illustrate the role of each component and support the motivation of the proposed method well.\\n\\nMy only concern is that the proposed WC algorithm seems to be applicable to many tasks, including discriminative scenarios. This paper seems to have the potential to be a more general paper about the WC method. Why consider only GANs? What is the performance of WC compared with BN/ZCA whitening in other tasks? It would be better if the authors could elaborate on the motivation for choosing GANs as the application.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good results, motivation unclear for GAN\", \"review\": [\"This paper proposes a Whitening and Coloring (WC) transform to replace batch normalization (BN) in generators for GANs. WC generalizes BN by normalizing features with a decorrelating (whitening) matrix and then denormalizing (coloring) the features with learnable weights. The main advantage of WC is that it exploits the full correlation matrix of the features, while BN only considers the diagonal.
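\\n\\nFor concreteness, a minimal numpy sketch of the whiten-then-color idea (forward pass only; the shapes are illustrative, the epsilon ridge is an assumption added for numerical safety, and a real layer would additionally keep running statistics for inference):
import numpy as np
def whiten_color(X, Gamma, beta):
    # X: (n, d) batch of features; Gamma: (d, d) learnable coloring; beta: (d,) shift.
    Xc = X - X.mean(axis=0)                             # center the batch
    S = Xc.T @ Xc / (len(X) - 1)                        # batch covariance
    L = np.linalg.cholesky(S + 1e-6 * np.eye(len(S)))   # S ~= L L^T
    Xw = np.linalg.solve(L, Xc.T).T                     # whitened: covariance ~= I
    return Xw @ Gamma.T + beta                          # color (full matrix) and shift
rng = np.random.default_rng(0)
X = rng.standard_normal((512, 8)) @ rng.standard_normal((8, 8))   # correlated batch
Y = whiten_color(X, np.eye(8), np.zeros(8))
assert np.allclose(np.cov(Y.T), np.eye(8), atol=2e-2)   # fully decorrelated
In the conditional variant (cWC), a class-specific (Gamma_y, beta_y) pair is looked up, while the whitening itself stays unconditioned, as the authors clarify in the discussion above.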
WC is differentiable and is only 1.32x slower than BN. The authors also apply conditional WC, which learns the coloring parameters conditioned on labels, to conditional image generation. Experimental results show that WC achieves a better Inception Score and Fréchet Inception Distance (FID) compared to BN on CIFAR-10, CIFAR-100, STL-10 and Tiny ImageNet. Furthermore, the conditional image generation results of WC are better than those of all previous methods.\", \"I have some detailed comments below.\", \"The paper is well written, and I generally enjoyed reading the paper.\", \"The experimental results look sufficient, and I appreciate the ablation study sections.\", \"The score on supervised CIFAR-10 is better than previous methods.\", \"The main text is longer than expected. I would suggest shortening Section 3.1 (Cholesky decomposition), Section 4 (conditional color transformation) and the text in Section 5 (experiments).\", \"The proposed WC transform is general. It is a bit unclear why it is particularly effective for the generator in GANs. Exploiting the full correlation matrix sounds reasonable, but it may also introduce instability. It would help if the authors had an intuitive way to show that whitening is better than normalization.\", \"It is unclear why conditional WC can be used for generation conditioned on class labels. In Dumoulin 2016, conditional instance normalization is used for generating images conditioned on styles. As image styles are described by the Gram matrix (correlations) of features, changing first- and second-order statistics of features is reasonable for image generation conditioned on styles. I cannot understand why conditional WC can be used for generation conditioned on class labels. I would like the authors to carefully explain the motivation, and also provide visual results, e.g., using the same random noise as input but only changing the class conditions.\", \"It is unclear to me why the proposed whitening based on Cholesky decomposition is better than the ZCA-based one in Huang 2018. Specifically, could the authors explain why WC is better than W_{zca}C in Table 3?\", \"The authors claim progressive GAN used a larger generator to achieve better performance than WC. A WC layer is generally larger than a BN layer and has more learnable parameters. Could the authors compare the number of generator parameters in BN-ResNet, WC-ResNet, and progressive GAN?\", \"In Table 3, std-C is better than WC-diag, which indicates coloring is more important. In Table 6, cWC-diag is better than c-std-C, which indicates whitening is more important. Why?\", \"What is the batch size used for training? For conditional WC, do the samples in each minibatch have the same label?\", \"Having ImageNet results will be a big support for the paper.\", \"=========== comments after reading rebuttal ===========\", \"I appreciate the authors' feedback. I raised my score for Fig. 7 showing the conditional images, and for the experiments on ImageNet.\", \"I think WC is a reasonable extension of BN, and I generally like the extensive experiments. However, the paper is still borderline to me due to the following concerns.\", \"I strongly encourage the authors to shorten the paper to the recommended 8 pages.\", \"The motivation of WC for GANs is still unclear. 
WC is a general extension of BN, and a simplified version has been shown to be effective for discrimination in Huang 2018. I understand the empirically good performance for GANs. But I am not convinced of why WC is particularly effective for GANs compared to discrimination. The smoothness explanation of BN applies to both GANs and discrimination. I actually think it may be nontrivial to extend the smoothness argument from BN to WC.\", \"The motivation of cWC is still unclear. I did not find the details of cBN for class-label conditions, and how they motivated it, in (Gulrajani et al. (2017)) and (Miyato et al. (2018)). Even if it has been used before, I would encourage the authors to restate the motivation in the paper. Saying it has been used before is an unsatisfactory answer for an unintuitive setting.\", \"Another less important comment is that it is still hard to say how much benefit we get from the additional learnable parameters in WC compared to BN. It is probably not so important because it can be a good trade-off for state-of-the-art results. In Table 3 for unconditioned generation, it looks like a large part of the benefit comes from the larger parameter space. For conditioned generation in Table 6, I am not sure whether whitening is conditioned or not, which makes it less reliable to me. If whitening is conditioned, then the samples in each minibatch may not be enough to get a stable whitening. If whitening is unconditioned, then there seems to be a mismatch between whitening and coloring.\", \"====== second round after rebuttal =============\", \"I raise the score again for the commitment to shortening the paper and the detailed response from the authors. That being said, I am not fully convinced about the motivations for WC and cWC.\", \"GAN training is more difficult and unstable, but that does not explain why WC is particularly effective for GAN training.\", \"I have never seen papers saying cBN/cWC is better than other conditional generators conditioned on class labels. I think the capacity argument is interesting, but I am not sure whether it applies to convolutional nets (where the mean and variance of a channel are used), or how well it can explain the performance, because neural nets are overparameterized in general. I would encourage the authors to include these discussions in the paper.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
Sk4jFoA9K7
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks
[ "Jan Svoboda", "Jonathan Masci", "Federico Monti", "Michael Bronstein", "Leonidas Guibas" ]
Deep learning systems have become ubiquitous in many aspects of our lives. Unfortunately, it has been shown that such systems are vulnerable to adversarial attacks, making them prone to potential unlawful uses. Designing deep neural networks that are robust to adversarial attacks is a fundamental step in making such systems safer and deployable in a broader variety of applications (e.g., autonomous driving), but more importantly it is a necessary step towards designing novel and more advanced architectures built on new computational paradigms rather than marginally building on the existing ones. In this paper we introduce PeerNets, a novel family of convolutional networks alternating classical Euclidean convolutions with graph convolutions to harness information from a graph of peer samples. This results in a form of non-local forward propagation in the model, where latent features are conditioned on the global structure induced by the graph, and yields models that are up to 3 times more robust to a variety of white- and black-box adversarial attacks than conventional architectures, with almost no drop in accuracy.
[ "peernet", "peernets", "graph", "geometric deep learning", "adversarial", "perturbation", "defense", "peer regularization" ]
https://openreview.net/pdf?id=Sk4jFoA9K7
https://openreview.net/forum?id=Sk4jFoA9K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Syx_k8PHl4", "HylOU1LpAQ", "SJeLIho2C7", "Byg8yHtZRX", "HklyONYZAX", "Bkecy4F-0Q", "B1lhSkeq67", "BkxmCEYpnX", "HygvKNtThX", "HkgMKRcth7", "rJxejaXmhQ", "H1lwBrml27", "H1x6i6fgnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_review", "comment", "comment" ], "note_created": [ 1545070047851, 1543491408482, 1543449678028, 1542718685643, 1542718567063, 1542718433564, 1542221635841, 1541407947109, 1541407870571, 1541152377520, 1540730263735, 1540531518696, 1540529572646 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper479/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper479/Authors" ], [ "ICLR.cc/2019/Conference/Paper479/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper479/Authors" ], [ "ICLR.cc/2019/Conference/Paper479/Authors" ], [ "ICLR.cc/2019/Conference/Paper479/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper479/Authors" ], [ "ICLR.cc/2019/Conference/Paper479/Authors" ], [ "ICLR.cc/2019/Conference/Paper479/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper479/AnonReviewer2" ], [ "(anonymous)" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents a novel with compelling experiments. Good paper, accept.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Accept\"}", "{\"title\": \"Answering other concerns\", \"comment\": \"We apologize to the reviewer for our brief response. We have tried to address what we believed to be the most prominent concern, thus inadvertently overseeing some other important comments.\\n\\nPlease find answers to the additional concerns below.\\n\\n1. I would have liked to see a response regarding my point on batch sizes? Is that a problem at all in your approach.\\n---\\nAs we mention in our paper, we used somewhat smaller batch sizes than traditional CNNs, primarily due to the current memory requirements and non-optimized implementation of our code. \\nThe attention weights do not depend on number of peers in the graph and we can therefore change batch size dynamically as we want. We have tried batch sizes as small as 8 samples without seeing dramatic decrease in performance. \\nDuring test time, we have tried different number of peers in the graph - 10,50,100,200,500. We did not observe any significant changes in performance, please refer to some additional results in Table 4 of Appendix C.\\n\\n2. What conclusions can we draw from MNIST and CIFAR for real-world applications?\\n---\\nWe believe that key novelty of our work is a deep-learning network architecture where we can interleave graph layers with convolutional layers and learn everything end-to-end. To our knowledge, this is the first such architecture, and it has many potential use-cases, e.g. image inpainting, few-shot learning, adversarial defense, etc. We have chosen to showcase our architecture on a very hot topic of adversarial attack defense.\\nIt should be noted that a vast majority of the defense mechanisms described in the literature or available online do not show any results on ImageNet but rather smaller benchmarks such as CIFAR. 
We have therefore focused on such datasets, in order to provide, if not a direct comparison, at least a ballpark relative to the current state-of-the-art.\\nWe agree with the reviewer that it is hard to draw conclusions regarding real-world applications of our approach from the datasets we have tested on. However, we firmly believe that extending our approach with the ideas mentioned in our comment on scalability should make our method a good fit for real-world scenarios.\"}", "{\"title\": \"please elaborate on all concerns\", \"comment\": \"I am a bit disappointed by the authors' response, which only picks up one of the points raised in my review.\\n\\n1) I would have liked to see a response regarding my point on batch sizes. Is that a problem at all in your approach?\\n\\n2) What conclusions can we draw from MNIST and CIFAR for real-world applications?\"}", "{\"title\": \"Additional evaluation suggests the defense does not cause gradient masking\", \"comment\": \"Thank you for your comments.\\n\\nBased on your concerns, we provide additional evaluation below to show that our approach does not cause gradient masking. We also add the new results to the supplementary material of our paper.\\n\\n1. It appears this paper is causing gradient masking. Looking at Figure 5, FGSM is a more effective attack than PGD at eps=0.1, which is highly suspicious and is listed as one of the tests for identifying gradient masking in Athalye et al. 2018.\\n---\\nRegarding your concern about Figure 5, where FGSM seems more effective than PGD at eps=0.1: we attribute this to the choice of hyperparameters for the FGSM and PGD attacks. The number of iterations for each value of epsilon for the PGD attack was set to 40. We have tried increasing this parameter to 100 iterations and evaluating on a small subset of the test set. This causes PGD to find slightly better perturbations in many cases, and slightly improves the attack on PeerNet. However, the overall comparison of PeerNet vs. the CNN baseline does not change significantly, and we therefore chose to keep the default configuration of 40 iterations, as it is much more computationally feasible for the full test set of 10000 samples.\\n\\n2. The authors may wish to try black-box attacks (e.g., SPSA by Uesato et al. at ICML'18) to see if this is happening.\\n---\\nFurther, to support our claim, we have taken this repository implementing black-box attacks, including the SPSA attack mentioned by the reviewer: https://github.com/sunblaze-ucb/blackbox-attacks .\\nWe have used their code for the CIFAR-10 dataset and performed the evaluation on a subset of 100 test samples. The attacks are untargeted, both single-step and iterative query-based black-box attacks computed with the finite-difference method, compared against a CW-loss-based white-box attack.
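\\n\\nFor readers unfamiliar with query-based attacks, the finite-difference gradient estimate at their core is simple. A minimal sketch (the toy loss, shapes and step size are illustrative; the repository's actual estimator batches and tunes this more carefully):
import numpy as np
def fd_gradient(loss, x, delta=1e-4):
    # Two-sided finite-difference estimate of d(loss)/dx, built from model
    # queries only -- no access to the network's internal gradients.
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e.flat[k] = delta
        g.flat[k] = (loss(x + e) - loss(x - e)) / (2 * delta)
    return g
loss = lambda x: np.sum(x ** 2)        # stand-in for the CW loss on one sample
x = np.arange(4.0)
assert np.allclose(fd_gradient(loss, x), 2 * x)
The single-step variant then takes one FGSM-like step x + eps * sign(g), while the iterative variant repeats the estimate-and-step loop within the eps ball.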
The code reports \"fraction of targtets achieved\" for both white-box (whitebox_succ) and black-box (blackbox_succ) attacks, i.e., the percentage of samples on which the attack was successful.\\nWe have left the default attack configuration from the repository, using epsilon = 8.0.\", \"single_step\": \"ResNet-32 CNN (eps=8.0): whitebox_succ = 82% | blackbox_succ = 82%\\nPeerNet (eps=8.0): whitebox_succ = 34% | blackbox_succ = 14%\\n\\nFor the iterative attack, we have tried different values of epsilon for the CNN and PeerNet to allow a fair comparison of the attacks.\", \"iterative\": \"ResNet-32 CNN (eps=0.5): whitebox_succ = 41% | blackbox_succ = 41%\\nResNet-32 CNN (eps=8.0): whitebox_succ = 100% | blackbox_succ = 100%\\nPeerNet (eps=8.0): whitebox_succ = 45% | blackbox_succ = 28%\\nPeerNet (eps=12.0): whitebox_succ = 63% | blackbox_succ = 37%\\n\\nFrom the results above, we verify that our method does not cause gradient masking. Athalye et al. 2018 mention that iterative attacks should be stronger than single-step ones, which holds in the results above. Moreover, they state that black-box attacks should be a strict subset of white-box attacks, which is confirmed as well.\\n\\n3. Further, this paper claims ~15% robustness at eps=0.1 (25/255) on CIFAR-10, and a non-zero robustness at eps=0.2 (50/255). No prior work has been able to achieve greater than ~10% accuracy at eps=0.06 (16/255), see Madry et al. 2018.\\n---\\nWe are not aware of any proof that would establish a theoretical bound of ~10% accuracy at eps=0.06 on CIFAR-10. We attribute the significant gap to the superiority of our approach. The significant margin can potentially be partially reduced by tuning the hyperparameters of the attack methods. Nevertheless, from our observations, PeerNet would still remain very robust.\"}", "{\"title\": \"Requested analysis and comparison\", \"comment\": \"Thank you for the valuable remarks.\\n \\nWe have tested most of the concerns in points 1-4 during our experiments. However, we could not provide a full-extent analysis due to the limited length of the paper. Let us respond to each of the points separately below.\\n\\n1. How does varying the number of nearest neighbors change the network behavior?\\n---\\nWe observe that using a small k (~5) does not always provide enough information to perform the denoising, and the network is therefore less robust against adversarial examples.\\nOn the other hand, setting k too high (~20) yields too much regularization, and the network's original performance decreases more significantly.\\nIn our experiments, we have found k=10 to be a reasonable compromise.\\n\\n2. At test time, a fixed number of images are used for denoising - how does the choice of these images change accuracy or adversarial robustness?\\n---\\nWe refer the reviewer to Section 3 and Section 4.1 of our paper, where this is addressed in detail.\\n\\n3. Does just simple filtering of the feature map, say, by local averaging, perform equally well? \\n---\\nIt does not. We have tried simple smoothing of the feature maps: not only does it fail to make the network robust against adversarial attacks, but it also regularizes the original network too much, which results in a significant loss in classification accuracy. 
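\\n\\nThe difference is easy to see in a toy setting. A minimal sketch contrasting the two filtering strategies (the shapes, noise levels, k, and the plain k-NN mean are illustrative simplifications of the attention-weighted peer aggregation used in the paper):
import numpy as np
rng = np.random.default_rng(0)
f0 = rng.standard_normal(64)                        # clean feature vector
fa = f0.copy(); fa[::8] += 2.0                      # adversarially perturbed copy
peers = f0 + 0.1 * rng.standard_normal((500, 64))   # features from clean peer images
# (a) local smoothing filters fa with fa itself: the perturbation survives
box = (np.roll(fa, 1) + fa + np.roll(fa, -1)) / 3
# (b) peer filtering replaces fa with a combination of clean peer features
k = 10
idx = np.argsort(((peers - fa) ** 2).sum(axis=1))[:k]
peer_avg = peers[idx].mean(axis=0)
err = lambda g: np.linalg.norm(g - f0)
assert err(peer_avg) < err(box)                     # the peers denoise far better
If the perturbation is instead large enough to corrupt the nearest-neighbor lookup itself, this advantage disappears, which is exactly the failure mode discussed in point 4 below.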
Moreover, local averaging uses the information from the corrupted image itself to filter the feature map, which could even further amplify the noise.\\n\\n4. When do things start to break down? I imagine randomly replacing feature map values (i.e., with very poor nearest neighbors) will cause robustness and accuracy to go down - was this tested?\\n---\\nThis is of course true. Obviously, selecting very poor nearest neighbors will definitely break the method, as the newly created feature map will not express the original information anymore. In our paper, we even reason that the adversary often tries to fool the KNN algorithm directly, as we mention at the end of Section 4.3.2.\\nMoreover, we believe that our results show when things start to \"break down\". We explicitly mention that an unbounded attack will always fool the network. Also, our figures in the main text and tables in the supplementary material show that with increasing magnitude of the perturbation, things start to \"break down\". \\n\\n5. Based on the paper of Athalye et al., really the only method worth comparing to for adversarial defense is adversarial training. It is hard to judge absolute adversarial robustness performance without a baseline of adversarial training.\\n---\\nWe provide an evaluation below and add an additional section with the results in the supplementary material.\\n\\nWe have compared our approach to the adversarial training method using the code provided by Madry et al.: https://github.com/MadryLab/cifar10_challenge.\\nThe ResNet-32 baseline model provided in the TensorFlow repository (the same one we use as the CNN baseline in our paper) was trained using the script provided in the cifar10_challenge repository above.\\nWe have used two training configurations, producing two baseline models: the default one provided by the repository (ResNet-32 CNN A) and the same one as in our paper (ResNet-32 CNN B).\\nPeerNet was trained traditionally, without adversarial training. \\nThe attack was left as defined in the repository by Madry et al.\", \"resnet_32_cnn_a\": \"original_acc = 78.86% | adversarial_acc = 45.47%\", \"resnet_32_cnn_b\": \"original_acc = 75.59% | adversarial_acc = 42.53%\", \"peernet\": \"original_acc = 77.44% | adversarial_acc = 64.76%\\n\\nThe results show the superiority of PeerNet on this benchmark. PeerNet was trained without considering any specific attacks and still outperforms the ResNet-32 CNN, which was adversarially trained using this specific attack, by a margin of 20%.\"}", "{\"title\": \"Scalability of our method\", \"comment\": \"Thank you for the insightful comments.\\n\\n1. It is stated that future work will aim at scaling PeerNets to benchmarks like ImageNet, but it is unclear how this could be done. Is there any hope this could be applied to problems like 3D imaging data or videos?\\n---\\nRegarding the concerns about method scalability: the current bottleneck of our approach is processing all feature maps pixel-wise. We see potential for scaling our approach by operating on superpixels, or NxN patches, instead of processing all pixels individually.\"}", "{\"comment\": \"It appears this paper is causing gradient masking. Looking at Figure 5, FGSM is a more effective attack than PGD at eps=0.1, which is highly suspicious and is listed as one of the tests for identifying gradient masking in Athalye et al. 
2018.\\n\\nThe authors may wish to try black-box attacks (e.g., SPSA by Uesato et al. at ICML'18) to see if this is happening.\\n\\nFurther, this paper claims ~15% robustness at eps=0.1 (25/255) on CIFAR-10, and a non-zero robustness at eps=0.2 (50/255). No prior work has been able to achieve greater than ~10% accuracy at eps=0.06 (16/255), see Madry et al. 2018.\", \"title\": \"The defense appears to be causing gradient masking\"}", "{\"title\": \"Baseline comparisons\", \"comment\": \"Thanks for suggesting the reference, which we will include in the revision. Considering the paper by Athalye et al. (https://arxiv.org/abs/1802.00420), most of the recently proposed defenses are ineffective. We actually refer to this fact in Section 4.3.2, where the 2nd paragraph points out that the way we train our model should make it more robust compared to the baselines, which, by the nature of their use, are prone to the problem of obfuscated gradients.\\n\\nWe however needed to select some baselines to compare to, and chose the methods that are, in some sense, similar to our approach.\"}", "{\"title\": \"Additional comparison\", \"comment\": \"Thank you for your comments.\\n\\nCan you compare with adversarial training? \\n---\\n\\n1. We tried ResNet-32 with adversarial training. The resulting combination is not efficient, as also reported by Dezfooli et al. \\n\\n2. We used the default parameters suggested by foolbox. During the PGD iterations, we have observed that the update in perturbation strength (increase or decrease) diminishes to the order of 1e-3 over the iterations, and we therefore did not expect that increasing the number of iterations would give us any noticeable improvement. \\n\\nFurther, have you considered the CW attack, gradient-free attacks and adaptive versions?\\n---\\n\\n3. In our work, we mainly focused on gradient-based white-box attacks and compared primarily against the universal adversarial perturbations by Dezfooli et al., and also against some of the classic gradient-based attacks (gradient descent, FGSM, PGD). We have not tried gradient-free attacks. \\n\\nFollowing your suggestion, we evaluated our method on the CW L2 attack, which is newly implemented in foolbox. To provide a fair comparison, we used the same settings for PeerNets and the CNN baseline. The results are reported below:\", \"cnn_baseline\": \"L2 error = 49.86 | Linf error = 4.327 | fooling rate = 100%\", \"peernets\": \"L2 error = 365.59 | Linf error = 27.43 | fooling rate = 93.76%\\n\\nThis clearly shows that PeerNets are more robust to the CW L2 attack (the perturbation required to achieve a similar fooling rate on PeerNet has to be nearly an order of magnitude stronger).\"}", "{\"title\": \"interesting work introducing graph neural nets as regularization, with practical limitations\", \"review\": \"The paper presents an interesting novel approach to training neural networks with so-called peer regularization, which aims to provide robustness to adversarial attacks. The idea is to add a graph neural network to a spatial CNN. A graph is defined over similar training samples, which are found using a Monte Carlo approximation.\\n\\nThe regularization using graphs reminds me of recent work at ICML on semi-supervised learning (Kamnitsas et al. 
(2018) Semi-supervised learning via compact latent space clustering), which uses a graph to approximate cluster density that acts as a regularizer for training on labelled data.\\n\\nThe main problem I see with these approaches is that they rely on sufficiently large batch sizes, which could be (currently) problematic for many real-world applications. Memory and computation limitations are mentioned, but not sufficiently discussed. It would be good to add further details on practical limitations.\\n\\nExperiments are limited to benchmark data using MNIST, CIFAR-10 and CIFAR-100. A comprehensive evaluation has been carried out with insightful experiments and a good comparison to the state-of-the-art. Both white- and black-box adversarial attacks are explored, with promising results for the proposed approach.\\n\\nHowever, it is difficult to draw conclusions for real-world problems of larger scale. The authors state that the proposed framework can be added to any baseline model, but fail to clearly state its limitations. It is stated that future work will aim at scaling PeerNets to benchmarks like ImageNet, but it is unclear how this could be done. Is there any hope this could be applied to problems like 3D imaging data or videos?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Analysis and experimental comparisons are lacking\", \"review\": \"After reading the authors' response, I'm revising my score upwards from 5 to 6.\\n\\nThe authors propose a defense against adversarial examples that is inspired by \"non-local means filtering\". The underlying assumption seems to be that, at the feature level, adversarial examples manifest as IID noise in feature maps, which can be \"filtered away\" by using features from other images. While this assumption seems plausible, no analysis has been done to verify it in a systematic way. Some examples of verifying this are:\\n\\n1. How does varying the number of nearest neighbors change the network behavior?\\n2. At test time, a fixed number of images are used for denoising - how does the choice of these images change accuracy or adversarial robustness?\\n3. Does just simple filtering of the feature map, say, by local averaging, perform equally well? \\n4. When do things start to break down? I imagine randomly replacing feature map values (i.e., with very poor nearest neighbors) will cause robustness and accuracy to go down - was this tested?\\n\\nBased on the paper of Athalye et al., really the only method worth comparing to for adversarial defense is adversarial training. It is hard to judge absolute adversarial robustness performance without a baseline of adversarial training.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"comment\": \"Can you compare with adversarial training? Also, did you vary the PGD iterations and verify that the accuracy does not change?\\n\\nFurther, have you considered the CW attack, gradient-free attacks and adaptive versions?\", \"title\": \"Comparison with Madry et al. 2017\"}", "{\"comment\": \"For what it's worth, both of these defenses you compare against are ineffective ( https://arxiv.org/abs/1711.08478 ).\", \"title\": \"Neither MagNet nor BRELU are effective\"}" ] }
B1EjKsRqtQ
Hierarchical Attention: What Really Counts in Various NLP Tasks
[ "Zehao Dou", "Zhihua Zhang" ]
Attention mechanisms in sequence-to-sequence models have shown great ability and remarkable performance in various natural language processing (NLP) tasks, such as sentence embedding, text generation, machine translation, machine reading comprehension, etc. Unfortunately, existing attention mechanisms only learn either high-level or low-level features. In this paper, we argue that the lack of hierarchical mechanisms is a bottleneck in improving the performance of attention mechanisms, and propose a novel Hierarchical Attention Mechanism (Ham) based on the weighted sum of different layers of a multi-level attention. Ham achieves a state-of-the-art BLEU score of 0.26 on the Chinese poem generation task and an average improvement of nearly 6.5% over existing machine reading comprehension models such as BIDAF and Match-LSTM. Furthermore, our experiments and theorems reveal that Ham has greater generalization and representation ability than existing attention mechanisms.
[ "attention", "hierarchical", "machine reading comprehension", "poem generation" ]
https://openreview.net/pdf?id=B1EjKsRqtQ
https://openreview.net/forum?id=B1EjKsRqtQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1lQ-WYexV", "Sylhf4LAh7", "B1eHy_bahX", "BkeUUGo937", "ByeRPZke5Q" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1544749307264, 1541461012147, 1541375965247, 1541218893949, 1538416998031 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper478/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper478/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper478/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper478/AnonReviewer1" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The authors propose a hierarchical attention layer which combines intermediate layers of multi-level attention. While this is a simple idea, and the authors show some improvements over the baselines, the authors raised a number of concerns about the validity of the chosen baselines, and the lack of more detailed evaluations on additional tasks and analysis of the results. Given the incremental nature of the work, and the significant concerns raised by the reviewers, the AC is recommending that this paper be rejected.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Incremental work, with limited experimental validation\"}", "{\"title\": \"Lacks Novelty , incomplete results\", \"review\": \"Overall, this is an incremental paper.\\nThe authors propose a hierarchical attention layer, which computes an aggregation of self attention layer outputs in the multi level attention model. This seems like a small improvement.\\n\\nThere are results using this hierarchical attention layer instead of the vanilla attention layers on Machine Reading Comprehension and Chinese Poem Generation. The authors should have also included results on more tasks to show the clear improvement of the proposed method.\", \"the_issues_with_this_paper_are\": [\"Aggregating weights of different layers has been an idea explored before (Elmo, Cove, etc.). So the model improvement itself seems small.\", \"Lack of strong experimental evidence. In my regard, the experiments are somewhat incomplete. In both the tasks, the authors compare only the vanilla model (BIDAF, MatchLSTM, R-NET) and the model with HAM layers. It is not clear where the improvement is coming from. It would have made sense to compare the number of parameters and also, using the same number of vanilla attention layers which outputs the last layer and compare it to the one proposed by the authors.\", \"Since the argument is towards using weighted average rather than the last layer, there should have been a more detailed analysis on what was the weight distribution and on how important were representations from different layers.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A few major issues\", \"review\": \"The paper proposes to enhance existing multi-level attention (self-attention) mechanism by obtaining query and key vectors (= value vectors) from all levels after weighted-averaging them. The paper claims that this is also theoretically beneficial because the loss function will converge to zero as the number of layers increase. It claims that the proposed architecture outperforms existing attention-based models in English MRC test (SQuAD), Chinese MRC test, and Chinese poem generation task.\\n\\nI find three major issues in the paper.\\n\\n1. 
I think the proposed hypothesis lacks the novelty that the ICLR audience seeks. Through many existing architectures (ResNet, ELMo), we already know that skip connections between CNN layers or a weighted average of multiple LSTM layers can improve a model significantly. Perhaps this could be an application paper that brings existing methods to a slightly different (attention) domain, but not only is such a paper less suitable for ICLR, it would also require strong experimental results. But as I will detail in the second point, I also have some worries about the experiments. \\n\\n2. The experimental results have problems. For the English MRC experiment (SQuAD), the reproduced match-LSTM score is ~10% below the number reported in its original paper. Furthermore, it is not clear whether the improvement comes from having multiple attention layers (which is not novel) or from weighted-averaging the attention layers (the proposed method). BiDAF and match-LSTM have single attention layers, so it is not fair to compare them with multi-layer attention. \\n\\n3. Lastly, I am not sure I understood the theoretical section correctly, but it is not very interesting that having multiple layers allows one to approach zero loss more closely. In fact, any sufficiently large model can obtain close-to-zero loss on the training data. This is not a sufficient condition for a good model. We cannot guarantee that the model has generalized well; it might have just overfit to the training data.\", \"a_few_minor_issues_and_typos__on_the_paper\": [\"First para second sentence: In -> in\", \"First para second sentence: sequence to sequence -> sequence-to-sequence\", \"Second last para of intro: sentence fragment\", \"Figure 3: would be good to have an English translation.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"A simple extension of multi-level attention, but needs more extensive comparison to existing methods\", \"review\": \"The paper introduces hierarchical attention, where the authors propose a weighted combination of all the intermediate layers of multi-level attention. The idea is simple and seems promising; however, the originality seems incremental.\\n\\nIn order to fully demonstrate the significance of the proposed algorithm, the authors should conduct more comparisons, for example, to multi-level attention. Just comparing with one-level attention seems unfair given the significant increase in computation. Another aspect of comparison may be to consider computation and performance improvements together and discuss the best trade-off. The authors should also include some standard benchmark datasets for comparison. The current ones are good, but it is not clear what the best state-of-the-art results on them are when compared with all other methods.\\n\\nThe analysis of the network's representation and convergence is nice, but it does not bring much insight. The argument that the global minimum of the loss function decreases as the parameter size increases can be made for nearly all models, but it is of little practical use since there is no guarantee one can reach the global optimum of these models.\\n\\nI recommend that the authors analyze/demonstrate how effective this weighted combination is. 
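\\n\\nFor reference, the combination under discussion is essentially an ELMo-style learned mixture over depth. A minimal sketch (the layer count, dimensions and softmax parameterization are illustrative assumptions, not details taken from the paper):
import numpy as np
def ham_combine(layer_outputs, w):
    # layer_outputs: list of L arrays, each (seq_len, d), one per attention level;
    # w: (L,) learnable mixing logits, softmax-normalized over depth.
    a = np.exp(w - w.max())
    a = a / a.sum()
    return sum(ai * h for ai, h in zip(a, layer_outputs))
L, T, d = 4, 7, 16
hs = [np.random.default_rng(i).standard_normal((T, d)) for i in range(L)]
out = ham_combine(hs, np.zeros(L))     # uniform mixture at initialization
assert out.shape == (T, d)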
For example, the paper can benefit from some clear examples that show the learned weights across the layers and which ones are more important.\\n\\nThe presentation of the paper needs some polishing. For example, there are numerous typos and grammatical errors throughout.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"comment\": \"The paper presents a fairly simple addition to the prevailing attention mechanism in the form of an attention layer along the depth. The authors test the new architecture on a couple of important NLP tasks and beat the existing state-of-the-art approaches. The paper is clearly written and easy to follow, though the formatting and grammar could be improved.\", \"title\": \"Simple yet intuitive modification to the attention mechanism gives impressive results\"}" ] }
r1fiFs09YX
Sample-efficient policy learning in multi-agent Reinforcement Learning via meta-learning
[ "Jialian Li", "Hang Su", "Jun Zhu" ]
To gain high rewards in multi-agent scenarios, it is sometimes necessary to understand other agents and make corresponding optimal decisions. We can solve these tasks by first building models of the other agents and then finding the optimal policy with these models. To obtain an accurate model, many observations are needed, which can be sample-inefficient. Moreover, the learned model and policy can overfit to the current agents and may not generalize if the other agents are replaced by new ones. In many practical situations, each agent we face can be considered a sample from a population with a fixed but unknown distribution. Thus we can treat the task against some specific agents as a task sampled from a task distribution. We apply a meta-learning method to build models and learn policies. Therefore, when new agents arrive, we can adapt to them efficiently. Experiments on grid games show that our method can quickly obtain high rewards.
[ "Multi-agent", "Reinforcement Learning", "Meta-learning" ]
https://openreview.net/pdf?id=r1fiFs09YX
https://openreview.net/forum?id=r1fiFs09YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJghEP5Gl4", "SJxGZz8vTX", "SkeMpWLvpm", "BygUC9EDnX" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544886067766, 1542050297788, 1542050233547, 1540995789902 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper477/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper477/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper477/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper477/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper extends MAML so that a learned behavior can be quickly (sample-efficiently) adapted to a new agend (allied or opponent). The approach is tested on two simple tasks in 2D gridworld environments: chasing and path blocking.\\n\\nThe experiments are very limited, they do not suffice to support the claims about the method. The authors did not enter a rebuttal and all the reviewers agree that the paper is not good enough for ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting approach to sample-efficient MARL, but the experiments do not support the claims\"}", "{\"title\": \"Unclear details, unconvincing experiments\", \"review\": \"This paper focuses on fast adaptation to new behaviour of the other agents of the environment, be it opponents or allies. To achieve this, a method based on MAML is proposed, with two main components:\\n1) Learn a model of some characteristics of the opponent, such as \\\"the final goal, next action, or any other character we wish to predict\\\"\\n2) Learn a policy that takes as input the output of the model and the state, and that outputs the action of the agent.\\n\\nThe goal is that after a phase of meta learning, where the agents learns how to play against some new agents sampled from the distribution of opponents, it can quickly adapt to a new unseen agent. (\\\"Experimental results show that the agent can adapt to the new opponent with a small number of interactions with the opponent\\\")\\n\\nWhile the motivation of this work is clear and the goal important for the RL community, the experiments fail to support the claim above.\\n\\nThe first task they demonstrate their approach on is a chasing game, where the opponent has a private goal cell it tries to reach, and the agent has to chase it. At the end of the game, it gets a reward of 10 if it is on the same cell, 5 if in an adjacent cell, and 0 otherwise. The exact details of the dynamic are not really clear, for example what happens in the event of a collision is not mentionned, and the termination condition is not mentionned either. (the text reads \\\"One game lasts for at least 15 steps\\\", maybe it was meant to be \\\"at most 15 steps\\\" ?).\\nThe first incoherent aspect of this experiment is that they use 800 iterations of meta-learning, and then, when testing, they fine-tune their networks against each test opponent during 4000 games. That is, they use 5 times more game when fine-tuning as opposed to when pre-training, which contradicts the claim \\\"the agent can adapt to the new opponent with a small number of interactions with the opponent\\\" (this is not really few-shot learning anymore).\\nFurther more, they compare their approach with various ablations of it: they either remove the meta-learning for the model (MA), for the policy (MO), or both (NM). 
The description of the NM baseline is not very precise, but it seems that it simply boils down to a regular (dueling) DQN: in this setting, since the opponent appears to have a fixed goal, fine-tuning against a single opponent simply boils down to learning a policy that reaches a specific cell of the grid, which we can expect DQN to solve perfectly on an 8x8 grid with 4000 training games. And yet, the curve for NM in graph 2c is not only really noisy but also falls far from the optimum, which the authors don't discuss. There might be a problem with the hyperparameters used or the training loop.\", \"the_second_task_is_a_blocking_game\": \"the opponent has to choose among 5 paths to get to the top, and the agent has to choose the same path in order to block it. The action space should be precisely described; as it stands, it is difficult to understand the dynamics. There are at least two possible ways to parametrize the actions:\\n1) Similarly to the chasing game, the agents could move in the 8 directions. In that case, based on picture 3a, it seems that the agent can just mirror the move of the opponent: since the moves are simultaneous, that would mean that the agent is always one step late, but each path is long enough for the agent to reach the exit before its opponent (it is explicitly stated that the agent needs to block the exit, and that the opponent will not change path during one game). That would imply that perfect play is possible without any meta-learning or opponent modeling, and once again the NM baseline (or any vanilla DQN/policy gradient method) should perform much better.\\n2) Another alternative is to have an action space of 5 actions, which correspond to the 5 paths. In that case the game boils down to a bandit, since both agents only take one action. Note that under this assumption, the random policy would get the right path (and reward +10) with probability 1/5 and a wrong one (reward -10) with probability 4/5, which leads to an expected reward of -10*4/5 + 10/5 = -6. This is not consistent with graph 3c, since at the beginning of training the NM agent should have a random policy, and yet the graph reports an average reward of -10 (the -6 mark seems to be reached after ~1000 episodes).\\n\\nThe last task boils down to one opponent that reaches one cell on the right, while the agent must reach the matching cell on the left. In this setting, the same discussion on the action space as for the second task can be made. We note that the episode lasts for 16 steps, and the distance from the center to any cell is at most 4 steps: an optimal policy would be to wait for 4 steps in the middle and, as soon as the opponent has reached its goal, use the remaining 12 steps to get to the mirror one. Once again, this policy doesn't require any prediction of the opponent's goal, and it's hard to believe that DQN (possibly with an LSTM) is not able to learn that near perfectly.\", \"in_a_last_test_the_authors_compare_the_performance_of_their_algorithms_in_a_one_shot_transfer_setting\": \"they sample 100 opponents for each task and play only one game against each (no fine-tuning). It is not clear whether special care has been taken to ensure that none of the sampled opponents has already been seen during training.\\nWe note that the rewards reported for MO and MA (resp. 0.0 and -0.08) are not consistent with the description of the reward function: in the worst case, the opponent chooses a goal on one extreme (say y1 = 1) and the agent chooses an object on the other end (say y2 = 7). 
In that case, the reward obtained is sampled from a Gaussian with mean \\mu = 10 - 3/2 * |y1 - y2| (which in this case evaluates to 1) and variance 1. This is highly unlikely to give such a low average reward over 100 episodes (note that this is the worst case; if the opponent's goal is not on the extreme, the expected reward is necessarily higher). One possibility is that the agent never reaches an object, but in that case it would imply that the meta-learning phase was problematic.\\nWe also note that it is explicitly stated that the MOA, MO and MA methods are tested after meta-training, but nothing is specified for NM. Has it been trained at all? Against which opponents? Is it just a random policy? There are too many missing details for the results to be interpretable.\\n\\n\\nApart from that, the paper contains a significant amount of typos and grammatical mistakes; please proof-read carefully. Some of them are:\\n\\\"To demonstrate that meta-learning can do take\\\"\\n\\\"player 1 is the red grid and player 1 is the green one\\\"\\n\\\"we further assume that there exist a distribution\\\"\\n\\\" the goal\\u2019s location over the map is visualize in figure\\\"\\n\\\"Both players takes actions simultaneously\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"interesting idea, but the experiments do not validate the approach for opponent modeling\", \"review\": \"The paper presents an approach to multi-agent learning based on the framework of model-agnostic meta-learning. The originality of the approach lies in the decomposition of the policy into two terms, with applications to opponent modeling: the first part of the policy tries to predict some important characteristic of the agent (the characteristic itself is prior knowledge; the value it takes for a particular opponent is learnt from observations). The second part of the policy takes as input the estimated characteristic of the opponent and the current state, and produces the action. All networks are trained within the MAML framework. The overall approach is motivated by the task of opponent modeling for multi-agent RL.\\n\\nThe approach makes sense overall -- the \\\"value\\\" of the opponent is valuable prior knowledge. The originality is limited though. In this kind of paper, I would expect the experiments to make a strong case for the approach. Unfortunately, the experiments are extremely toyish and admittedly not really \\\"multi-agent\\\": the \\\"opponent\\\" has a fixed strategy that does not depend on what the current agent is doing (it is therefore not really an opponent). The experimental protocol is more akin to multitask RL than multi-agent RL, and it is unclear whether the approach could/should work for opponent modeling even on tasks of low complexity. In other words, the experimental section does not address the problem that is supposed to be addressed (opponent modeling).\", \"other_comments\": [\"\\\"The opponent in our game is considered as some player that won\\u2019t adapt its policy to our agent.\\\" -> in the experiments it is worse than that: the opponent's actions do not even depend on what the agent is doing... So admittedly the experiments are not really \\\"multi-agent\\\" (or \\\"multi-agent\\\" where the \\\"opponent\\\" is totally independent of what the agent is currently doing).\", \"\\\"Each method trains 800 iterations to get the meta learners and use them to initialize their networks. 
Then 10 new opponents are sampled as testing tasks. Four methods all train 4000 games for each testing task.\\\" -> what does 800 iterations mean? Does it mean 800 episodes? (It would seem strange for a \\\"fast adaptation task\\\" to have fewer episodes for training than for testing.)\", \"\\\"Notice that the reward trend for MOA first drops and then raises as the testing process goes on. This shows the process that the meta-learner adapt to the current task.\\\" -> the adaptation to the new opponent does not really explain the drop?\", \"Figure 3(c): the MA baseline has a reward of ~-10, which is worse than random (a uniform random placement at the 5 strategic positions would get 10*1/5 - 10*4/5 = -6). On the other hand, MOA achieves very high rewards, which indicates that the \\\"opponents'\\\" strategies have low entropy. What is the best achievable reward on the blocking game?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Overall interesting idea, unclear on technical details, missing an important baseline.\", \"review\": \"This paper proposes to apply MAML to a multi-agent setting. In this formulation, each opponent corresponds to a task and two separate parts of the policy are learned via meta-learning:\\n1) the opponent modelling network, which predicts the value function for a given opponent based on past actions and states; \\n2) the policy network, which takes in the state and the predicted value function of the opponent. \\nThe main concern with this paper is the lack of technical detail and an important missing baseline. The paper also suffers from a lack of clarity due to a large number of grammatical mistakes.\", \"technical_detail_and_concerns\": \"The paper mentions Duelling DQN as the RL algorithm in the inner loop. This is very unusual, and it's a priori unclear whether MAML with DQN in the inner loop is a sensible algorithm. For example, DQN relies both on a target network and an argmax operator, which seem to violate the differentiability requirements needed for MAML regarding higher-order gradients. The authors entirely miss this and fail to address possible concerns. \\n\\nThe authors also fail to provide any details regarding the exploration scheme used. In fact, a value function is never mentioned; instead, the authors talk about a policy pi^a_i, leaving it unclear how this policy is derived from the value function. When the Q-function takes as input the true opponent, there is no need for meta-learning of the policy: given a known opponent, the tuple (s_t, opponent) defines a Markov state. As far as I could gather from the paper, the authors are missing a baseline which simply learns a single Q-function across all opponents (rather than meta-learning it per opponent) that takes as input the predicted opponent. \\nMy expectation is that this is more or less what is happening in the paper. The authors also fail to compare and contrast their method with a number of recent multi-agent algorithms, e.g. MADDPG, COMA and LOLA. \\n\\nFurthermore, the results are extremely toy and seem to be for single runs, rendering them insignificant. \\n\\nWhile the idea itself is interesting, the above concerns render the paper unsuitable for publication in its current form.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
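Both reviews of this submission do quick expected-reward arithmetic: Reviewer 1's bandit reading of the blocking game gives a uniformly random policy 10·1/5 − 10·4/5 = −6, and the matching-game worst case draws rewards from a Gaussian with mean 10 − 1.5·|y1 − y2|. A short numeric check under exactly those stated assumptions — illustrative only, not code from the paper under review:

```python
# Numeric check of the reviewers' expected-reward arguments, under their
# stated assumptions; illustrative, not code from the paper under review.
import numpy as np

# Blocking game read as a 5-armed bandit: +10 on a match (prob 1/5),
# -10 otherwise (prob 4/5), for a uniformly random policy.
expected_random = (1 / 5) * 10 + (4 / 5) * (-10)
print(expected_random)  # -6.0, matching -10*4/5 + 10/5 = -6

# Matching game, worst case y1 = 1, y2 = 7: reward ~ N(10 - 1.5*|y1 - y2|, 1).
rng = np.random.default_rng(0)
mu = 10 - 1.5 * abs(1 - 7)  # = 1
print(rng.normal(mu, 1.0, size=100).mean())  # near 1, well above 0.0 / -0.08
```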
ryGiYoAqt7
Learning agents with prioritization and parameter noise in continuous state and action space
[ "Rajesh Devaraddi", "G. Srinivasaraghavan" ]
The Reinforcement Learning (RL) problem can be solved in two different ways - the value-function-based approach and the policy-optimization-based approach - to eventually arrive at an optimal policy for the given environment. One of the recent breakthroughs in reinforcement learning is the use of deep neural networks as function approximators to approximate the value function or q-function in a reinforcement learning scheme. This has led to results such as AlphaGo, where agents automatically learn how to play games with better-than-human performance. Deep Q-learning networks (DQN) and Deep Deterministic Policy Gradient (DDPG) are two such methods that have shown state-of-the-art results in recent times. Among the many variants of RL, an important class of problems is one where the state and action spaces are continuous --- autonomous robots, autonomous vehicles, and optimal control are all examples of such problems that lend themselves naturally to reinforcement-learning-based algorithms and have continuous state and action spaces. In this paper, we adapt and combine approaches such as DQN and DDPG in novel ways to outperform earlier results for continuous state and action space problems. We believe these results are a valuable addition to the fast-growing body of results on Reinforcement Learning, more so for continuous state and action space problems.
[ "reinforcement learning", "continuous action space", "prioritization", "parameter", "noise", "policy gradients" ]
https://openreview.net/pdf?id=ryGiYoAqt7
https://openreview.net/forum?id=ryGiYoAqt7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyeK5OMJxV", "BkeQx84T2X", "S1ez5NE52m", "r1ePXjQXh7" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544657041351, 1541387755300, 1541190793866, 1540729631512 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper476/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper476/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper476/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper476/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The authors take two algorithmic components that were proposed in the context of discrete-action RL - priority replay and parameter noise - and evaluate them with DDPG on continuous control tasks. The different approaches are nicely summarized by the authors, however the contribution of the paper is extremely limited. There is no novelty in the proposed approaches, the empirical evaluation is inconclusive and limited, and there is no analysis or additional insights or results. The AC and the reviewers agree that this paper is not strong enough for ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}", "{\"title\": \"Barely any novelty\", \"review\": \"The paper proposes an augmentation of the DDPG algorithm with prioritized experience replay plus parameter noise. Empirical evaluations of the proposed algorithm are conducted on Mujoco benchmarks while the results are mixed.\\n\\nAs far as I can see, the paper contains almost no novelty as it crudely puts together three existing algorithms without presenting enough motivation. This can be clearly seen even from the structuring of the paper, since before the experimental section, only a short two-paragraph subsection (4.1) and an algorithm chart are devoted to the description of the main ideas. Furthermore, the algorithm itself is a just simple addition of well-known techniques (DDPG + prioritized experience replay + parameter noise) none of which is proposed in the current paper. Finally, as shown in the experimental sections, I don't see a evidence that the proposed algorithm consistently outperform the baseline.\\n\\nTo sum up, I believe the submission is below the novelty threshold for a publication at ICLR.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting paper but limited novelty\", \"review\": \"This paper combines elements of two existing reinforcement learning approaches, namely, Deep Q-learning Networks (DQN) with Prioritised Experience Replay (PER) and Deep Deterministic Policy Gradient (DDPG) to propose the Prioritized Deep Deterministic Policy Gradient (PDDPG) algorithm. The problem is interesting and there is a nice review of relevant work. The algorithm has a limited novelty with a simple modification of the DDPG algorithm to add the PER component. Experiment results show improvements in certain simulation environments. However, the paper lacks insight on how and why results are improved on some settings while performing worse than the others. Detailed comments are as follows:\\n\\n1. Algorithm 1 is not self-contained. Yes, I understand that it is a slight modification to DDPG with changes being Line 11 and 16. But p_i^alpha is not defined anywhere in Algorithm 1. How the transition probabilties are updated on Line 16 is also not clear to me.\\n\\n2. 
It would be better if multiple simulation runs of the same experiment could be performed to give a more reliable display of performance.\\n\\n3. Section 6 is on Parameter Space Noise for Exploration. This is not the authors' proposed work, so it is strange to have a separate section here. At the end of Section 1, the authors wrote that \\\"We then use the concept of parameter space noise for exploration and show that this further improves the rewards achieved.\\\" This seems to be a bold claim given the varying performance displayed in Figures 2-4. Similar to Comment 2, more simulation runs and statistical tests need to be conducted to support this claim.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Limited novelty\", \"review\": \"The paper proposes PDDPG, a combination of prioritized experience replay, parameter noise exploration, and DDPG. Different combinations are then evaluated on MuJoCo domains, and the results are mixed.\\n\\nThe novelty of the work is limited, and the results are hard to interpret: sometimes PDDPG performs better, sometimes worse, and the training curves are only obtained with a single random seed. Also, the presented results are substantially worse than the current state of the art (e.g., TD3, SAC).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
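Reviewer 3 asks what p_i^alpha denotes in Algorithm 1 and how the transition probabilities are updated. In standard proportional prioritized experience replay (Schaul et al., 2016), which PDDPG combines with DDPG, each transition i carries a priority p_i (typically |TD error| + epsilon) and is sampled with probability P(i) = p_i^alpha / sum_k p_k^alpha. A minimal sketch of that standard scheme — not the PDDPG authors' code, and with importance-sampling corrections omitted for brevity:

```python
# Minimal sketch of proportional prioritized experience replay
# (Schaul et al., 2016); illustrative, not the PDDPG authors' implementation.
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.buffer, self.priorities = [], []

    def add(self, transition):
        # New transitions get the current max priority so they are seen at least once.
        self.priorities.append(max(self.priorities, default=1.0))
        self.buffer.append(transition)
        self.buffer = self.buffer[-self.capacity:]
        self.priorities = self.priorities[-self.capacity:]

    def sample(self, batch_size):
        p = np.asarray(self.priorities) ** self.alpha
        probs = p / p.sum()  # P(i) = p_i^alpha / sum_k p_k^alpha
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        return idx, [self.buffer[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        # After each learning step, priorities are refreshed to |TD error| + eps.
        for i, d in zip(idx, td_errors):
            self.priorities[i] = abs(d) + self.eps
```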
B1gstsCqt7
Sparse Dictionary Learning by Dynamical Neural Networks
[ "Tsung-Han Lin", "Ping Tak Peter Tang" ]
A dynamical neural network consists of a set of interconnected neurons that interact over time continuously. It can exhibit computational properties in the sense that the dynamical system’s evolution and/or limit points in the associated state space can correspond to numerical solutions to certain mathematical optimization or learning problems. Such a computational system is particularly attractive in that it can be mapped to a massively parallel computer architecture for power and throughput efficiency, especially if each neuron can rely solely on local information (i.e., local memory). Deriving gradients from the dynamical network’s various states while conforming to this last constraint, however, is challenging. We show that by combining ideas of top-down feedback and contrastive learning, a dynamical network for solving the l1-minimizing dictionary learning problem can be constructed, and the true gradients for learning are provably computable by individual neurons. Using spiking neurons to construct our dynamical network, we present a learning process, its rigorous mathematical analysis, and numerical results on several dictionary learning problems.
[ "dynamical neural networks", "spiking neural networks", "dynamical system", "hardware friendly learning", "feedback", "contrastive learning", "dictionary learning", "sparse coding" ]
https://openreview.net/pdf?id=B1gstsCqt7
https://openreview.net/forum?id=B1gstsCqt7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1lYJijHeN", "S1xN4BLApX", "rJgVjEUATX", "Hyl2hXI067", "rklBey5vTQ", "SJe04G-g67", "ryxykzEyam" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545087712764, 1542509867775, 1542509724371, 1542509491703, 1542065900557, 1541571125648, 1541517782892 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper475/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper475/Authors" ], [ "ICLR.cc/2019/Conference/Paper475/Authors" ], [ "ICLR.cc/2019/Conference/Paper475/Authors" ], [ "ICLR.cc/2019/Conference/Paper475/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper475/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper475/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"While there has been lots of previous work on training dictionaries for sparse coding, this work tackles the problem of doing son in a purely local way. While previous work suggests that the exact computation of gradient addressed in the paper is not necessarily critical, as noted by reviewers, all reviewers agree that the work still makes important contributions through both its theoretical analyses and presented experiments. Authors are encouraged to work on improving clarity further and delineating their contribution more precisely with respect to previous results.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting work on how to train a dictionary from local information\"}", "{\"title\": \"Response\", \"comment\": \"Figure 2 serves to illustrate our theoretical results and shows how the algorithm is run in practice. We revised the caption of Figure 2, providing a more detailed and clear description.\\n\\nWe indeed cited and discussed the early \\\"similarity matching\\\" work (Hu et al. 2014) in our original submission. In our updated paper, we further included a later, more developed work (Pehlevan et al. 2018) in the reference. This line of work focuses on a novel learning objective function, while we study the sparse coding objective function that has been widely studied not only in neuroscience but also in signal processing and machine learning.\"}", "{\"title\": \"Response\", \"comment\": \"1. Our intention to separate B and D is for their different physical meanings: the former corresponds to particular connection weights and the latter is an argument in an optimization problem. The reviewer's feedback to merge B and D certainly seems useful. We will further revise our paper with more succinct mathematical notation but need a little more time to ensure the presentation remains coherent.\\n\\n2. We do not intend to claim to be the first to use feedback connections to train neural networks. This idea has a long history in contrastive learning (as we pointed out in the introduction) and can be even traced back to the recirculation algorithm for training autoencoders (Hinton \\\\& McClelland, 1988) that we discussed in Appendix C.2. We have updated Section 1.2 to incorporate the growing body of work in this direction suggested by the reviewer, and help clarify our contributions.\\n\\n3. 
The algorithm we established can be mapped to a massively parallel architecture and can, in principle, lead to performance gains in two ways: the run time can be shorter due to a higher level of parallelism, and the energy cost of each operation can be lower because the memory is located closer to each computation unit. To further quantify the efficiency advantage of the algorithm, one must design a new parallel computer architecture such as (Davies et al., 2018), since the efficiency gain is coupled with the ability to build hardware differently. Otherwise, there is no easy comparison to make against a conventional numerical optimization algorithm implemented on a general-purpose CPU. The goal of this paper is to effect dictionary learning on this specialized computational model, which is already a non-trivial task, and we hope this work can motivate future study of hardware designs to realize the potential performance gains.\\n\\n4. We have updated Section 1.1 and Appendix C.2 to avoid the potential confusions pointed out by the reviewer. In summary, the prior works that we cited all have significant gaps between the learning rules and solving the dictionary learning objective functions. The arguments made by each work are detailed and discussed in Appendix C.2.\\n\\n5. We believe our ability to compute the correct gradient is an important contribution, notwithstanding the reviewer's highly relevant comments. First, it is known to the deep learning and, more generally, the machine learning community that gradients need not be exact. For example, in the review article Optimization Methods for Large-Scale Machine Learning by Bottou/Curtis/Nocedal in SIAM Review, 2018, Equation 4.7a shows that as long as the angles between the approximate gradients and the true gradient are, on average, uniformly smaller than 90 degrees, convergence can occur. In practice, however, it is hard to establish such a property in a non-trivial way. Such an acute angle property is satisfied by using an *exact* gradient on a batch of data points. In this case, the approximate gradients are unbiased estimates of the full-batch gradient and thus make a zero-degree angle. Among the works that we cited in our article that use gradient approximations, none claims that the approximate gradients employed satisfy the acute angle property. This brings us to our second point. The work by Lillicrap et al. that the reviewer cited shows a number of excellent results. Note, however, that this work does not explicitly prove that the approximate gradients used satisfy the acute angle property. It shows that a random (but fixed) projection of the error will lead to the weights being so adjusted that the random projection becomes a suitable approximate gradient. Moreover, while successful learning is observed in quite a few examples, the authors established a convergence proof in the supplemental material (Note 11) only for a linear network without nonlinear activations. As discussed in Note 16, the theoretical results are limited so far. We thus circumvent many theoretical difficulties by being able to compute exact gradients in the first place.\"}", "{\"title\": \"Revision uploaded\", \"comment\": \"We thank all the reviewers for the detailed and constructive feedback. 
We have uploaded a revision of our submission to address the concerns, as explained in the responses below.\"}", "{\"title\": \"Local Sparse Codes\", \"review\": \"The authors study sparse coding models in which unit activations minimize a cost that combines: 1) the error between a linear generative model and the input data; and 2) the L1 norm of the unit activations themselves. They seek models in which both the inference procedure -- generating unit activations in response to each input data example -- and the learning procedure -- updating network connections so that the inferences minimize the cost function -- are local. By \\\"local\\\" they mean that the update to each unit's activation, and the updates to the connection weights, rely only on information about the inputs and outputs from that unit / connection. In a biological neural network, these are the variables represented by pre- and post-synaptic action potentials and voltages, and in hardware implementations, operations on these variables can be performed without substantially coordinating between different parts of the chip, providing strong motivation for the locality constraint(s).\\n\\nThe authors achieve a local algorithm that approximately optimizes the sparse coding objective function by using feedback: they send the sparse coding units' activities \\\"back\\\" to the input layer through feedback connections. In the case where the feedback connection matrix is the transpose of the sparse coding dictionary matrix (D), the elementwise errors in the linear generative model (e.g., the non-local part of the sparse coding learning rule obtained by gradient descent) are represented by the difference between the inputs and this feedback to the input layer: that difference can be computed locally at the input units and then sent back to the coding layer to implement the updates. The feedback connections B are updated in another local process that keeps them symmetric with the feedforward weights: B = D = F^T throughout the learning process.\\n\\nThe authors provide several theorems showing that this setup approximately solves the sparse coding problem (again, using local information), and show via simulation that their setup exhibits a similar evolution of the loss function during training to SGD on the sparse coding cost function.\\n\\nI think that the paper presents a neat idea -- feedback connections are too often ignored in computational models of the nervous system, and correspondingly in machine learning. At the same time, I have some concerns about the novelty and the presentation. Those are described below:\\n\\n1. The paper is unnecessarily hard to read, at least in part due to a lack of notational consistency. As just one example, with B = D, why use two different symbols for this matrix? This just makes it so that your reader needs to keep track mentally of which variable is actually which other variable, and that quickly becomes confusing. I strongly recommend choosing the simplest and most consistent notation that you can throughout the paper.\\n\\n2. Other recent studies also showed that feedback connections can lead to local updates successfully training neural networks: three such papers are cited below. The first two papers do supervised learning, while the third does unsupervised learning. It would be helpful for the authors to explain the key points of novelty of their paper: the application of these feedback connection ideas to sparse coding. 
Otherwise, readers may mistakenly get the impression that this work is the first to use feedback connections in training neural networks.\\n\\nGuerguiev, J., Lillicrap, T.P. and Richards, B.A., 2017. Towards deep learning with segregated dendrites. ELife, 6, p.e22901.\\n\\nSacramento, J., Costa, R.P., Bengio, Y. and Senn, W., 2018. Dendritic cortical microcircuits approximate the backpropagation algorithm. arXiv preprint arXiv:1810.11393.\\n\\nFederer, C. and Zylberberg, J., 2018. A self-organizing short-term dynamical memory network. Neural Networks.\\n\\n3. Given that the performance gains of the locality (vs. something like SparseNet) are given such emphasis in the paper, those gains should be shown in the numerical experiments. This could be quantified by runtime, or some other measure.\\n\\n4. The discussion of prior work is a little misleading -- although I'm sure this is unintentional. For example, at the top of p. 3, it is mentioned that the previous local sparse coding models do not have rigorous learning objectives. But then the appendix describes the learning objectives, and the approximations, made in the prior work. I think that the introduction should have a more transparent discussion of what was, and was not, in the prior papers, and how the current work advances the field.\\n\\n5. The paper -- and especially appendix C2 -- places strong emphasis on the importance of finding local implementations of true gradient descent, as opposed to the approximations made by prior authors. I'm not sure that's such a big deal, given that Lillicrap et al. showed nicely in the paper cited below that any learning rule that is within 90 degrees of true gradient descent will still minimize the cost function: even if an algorithm doesn't move down the steepest path, it can still have updates that always move \\\"downhill\\\", and hence minimize the loss function. Consequently, I think that some justification is needed showing that the current model, being closer to true gradient descent, really outperforms the previous ones. \\n\\nLillicrap, T.P., Cownden, D., Tweed, D.B. and Akerman, C.J., 2016. Random synaptic feedback weights support error backpropagation for deep learning. Nature communications, 7, p.13276.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review of sparse dictionary learning by dynamical neural networks\", \"review\": \"The seminal work of Olshausen and Field on sparse coding is widely accepted as one of the main sources of inspiration for dictionary learning. This contribution makes the connection from dictionary learning back to a neuronal approach. Building on the Local Competitive Algorithm (LCA) of Rozell et al. and the theoretical analysis of Tang et al., this submission revisits dictionary learning under two constraints: that the gradient is computed locally, and that the neural assemblies maintain consistent weights in the network. These constraints are relevant for a better understanding of the underlying principles in neuroscience and for application development on neuromorphic chipsets.\\n\\nThe proposed theorems extend the previous work on sparse coding with spiking neurons and address the update of the dictionary using only information available to local neurons. The submission also considers possible implementations on parallel architectures. 
The numerical experiments are conducted on three datasets and show the influence of weight initialization and the convergence on each dataset. An example of image denoising is provided in the appendix.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting, fully local learning approach to sparse coding\", \"review\": \"This paper proposes a dynamical neural network for sparse coding where all the interaction terms are learned. In previous approaches (Rozell et al.) some weights were tied to the others. Here the network consists of feedforward, lateral, and feedback weights, all of which have their own learning rule. The authors show that the learned weights converge to the desired solution for solving the sparse coding objective. This seems like a nice piece of work, an original approach that solves a problem that was never really fully resolved in previous work, and it brings things one step closer to both neurobiological plausibility and hardware implementation.\", \"other_comments\": \"What exactly is being shown in Figure 2 is still not clear to me.\\n\\n It would be nice to see some other evaluations, for example the sparsity vs. MSE tradeoff (this is reflected in the objective function in part, but it would be nice to see the tradeoff). \\n\\nThere is recent work from Mitya Chklovskii's group on \\\"similarity matching\\\" that also addresses the problem of developing a fully local learning rule. The authors should incorporate a discussion of this in their final paper.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
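For readers less familiar with the setup these reviews discuss, here is a minimal numerical sketch of the l1 dictionary-learning objective, min over D and a of 0.5*||x - D a||^2 + lam*||a||_1, with plain ISTA standing in for the paper's spiking dynamics. The locality point raised above is visible in the update: the dictionary gradient is -(x - D a) a^T, so once feedback weights B = D make the residual r = x - D a available at the input units, the update r a^T involves only locally available pre/post quantities. This is an illustration under those assumptions, not the paper's algorithm.

```python
# Sketch of l1 sparse coding with a local, residual-driven dictionary update.
# ISTA replaces the paper's spiking inference; illustrative only.
import numpy as np

def ista(x, D, lam=0.1, lr=0.1, steps=200):
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        r = x - D @ a                    # residual; computable via feedback B = D
        a = a + lr * (D.T @ r)           # gradient step on the quadratic term
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x = rng.standard_normal(16)
a = ista(x, D)
r = x - D @ a                            # residual available at the input units
D += 0.01 * np.outer(r, a)               # local update: -grad_D = r a^T
D /= np.linalg.norm(D, axis=0)           # renormalize atoms (common practice)
```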
HkxjYoCqKX
Relaxed Quantization for Discretized Neural Networks
[ "Christos Louizos", "Matthias Reisser", "Tijmen Blankevoort", "Efstratios Gavves", "Max Welling" ]
Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification.
[ "Quantization", "Compression", "Neural Networks", "Efficiency" ]
https://openreview.net/pdf?id=HkxjYoCqKX
https://openreview.net/forum?id=HkxjYoCqKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyefUlvSeV", "SJeTsd3tpX", "ByxwE2ROTm", "H1e-rddupm", "H1eXz1E_T7", "r1lmTPcDp7", "SJgl-zVD6m", "B1lLTW4P6X", "BylHD-NwTm", "ryxyLkND6Q", "rygmk1EDT7", "S1gcOCLUpX", "rJevdabGpQ", "BygITb1laQ", "SJgFk25qhQ", "rkgjbEeYnm" ], "note_type": [ "meta_review", "official_comment", "comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545068618492, 1542207653352, 1542151214537, 1542125625141, 1542106891124, 1542068154895, 1542042104313, 1542042046406, 1542041949097, 1542041415125, 1542041306618, 1541987953659, 1541705071121, 1541562813530, 1541217249096, 1541108739209 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper474/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper474/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper474/Authors" ], [ "ICLR.cc/2019/Conference/Paper474/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper474/Authors" ], [ "ICLR.cc/2019/Conference/Paper474/Authors" ], [ "ICLR.cc/2019/Conference/Paper474/Authors" ], [ "ICLR.cc/2019/Conference/Paper474/Authors" ], [ "ICLR.cc/2019/Conference/Paper474/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper474/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper474/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper474/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes an effective method to train neural networks with quantized reduced precision. It's fairly straight-forward idea and achieved good results and solid empirical work. reviewers have a consensus on acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"new approach\"}", "{\"title\": \"Clarifications about BOP count\", \"comment\": \"Dear anonymous commenter,\\n\\nOf course, you can find the answers below:\\n\\n1) In our computation of a model\\u2019s BOP count, we do take height and width of the feature maps into account. The formula as stated in [1] is given for \\u201ca single output calculation\\u201d and correspondingly by us multiplied for a whole layer\\u2019s BOP count. \\n\\n2) We merely aim to use the BOP count formula from [1] as a rough estimate of the actual BOPs of a given low-bit model, and not as an exact measure. Our aim was to have a sensible ranking of all the methods we compare. Indeed, for 1 bit weights and activations, the BOP approximation will be worse compared to fixed-point or floating-point networks. We would like to point out that the formula came with its own set of assumptions, which are stated at [1]. We agree that the BOP count is not a perfect measure of model complexity or execution speed, however it does serve as a normalizer for the purpose of comparison. Finally we recognize that execution speed might be identical or higher for example for a 4/4 bit model on a chip with a dedicated 4/4 instruction set compared to a 3/3 bit model on the same chip, due to suboptimal kernels. A similar conclusion could be drawn for a chip that does not possess a fixed-point instruction set when comparing fixed-point to floating-point models. That is to say the final execution speed/accuracy trade-off is very dependent on the targeted hardware and any measure that tries to generalize across different chips will either be very complex or always remain approximative. 
\\n\\n[1] Baskin, C., Schwartz, E., Zheltonozhskii, E., Liss, N., Giryes, R., Bronstein, A. M., & Mendelson, A. (2018). UNIQ: Uniform Noise Injection for the Quantization of Neural Networks. arXiv preprint arXiv:1804.10969.\"}", "{\"comment\": \"Dear authors,\\n\\nThanks for your answer; I learned a lot. But I find some (potential) mistakes in the BOP metric. \\n\\n1): The width $w$ and height $h$ of the feature map are not included in the layer complexity. I am sure this should be fixed. \\n\\n2): Let us assume weights and activations are all binary (1-bit). Then the convolutional operations become XNOR and popcount, which are all bitwise operations. So according to BOPs, the bitwise popcount complexity (for a single output calculation) is $n k^2 (2 + \\\\log_2 n k^2)$. However, this doesn't make sense, since this complexity holds for floating-point additions rather than bitwise operations. \\n\\nCould you check my claims?\", \"title\": \"Some questions about BOP count metric\"}", "{\"title\": \"Aggregation of feedback\", \"comment\": \"Dear reviewers and commenters,\\n\\nWe have updated the submission to include all of the discussed points, except for the learning curves for VGG, as we are currently rerunning the experiments in order to track them. We will perform another update as soon as that is finished. \\n\\nPlease also note that we have updated Figure 4, which contains the ImageNet results. We updated our BOP count implementation to correctly take into account the 8-bit input of the models that have a full-precision first layer. This resulted in a lower BOP count for these models. Nevertheless, we still observe that the RQ models lie on the Pareto frontier; hence, the conclusions do not change.\", \"edit\": \"We have uploaded a new version of the paper that contains the VGG learning curves in the appendix.\"}", "{\"title\": \"BOP count metric normalizes some design choices, we targeted generally available chips on the market today\", \"comment\": \"Dear anonymous commenter,\\n\\nThank you for your additional comment. We hope to address your doubts adequately:\\n\\n1) Indeed, implementing RQ for a pre-activation ResNet18 with the hyperparameters that you propose is feasible. Nevertheless, we also believe that it is not necessary: firstly, as we previously mentioned, the GBOP metric that we used in the submission \\u201cnormalizes\\u201d against the choice of having a full-precision first and last layer; therefore, we can safely conclude that the 8/8 bit RQ model that quantizes everything is better, both BOP-wise and accuracy-wise, than the 4/4 bit LQ-Net model that does not quantize the first and last layer. Secondly, we chose to experiment with the standard ResNet18 architecture in order to be able to compare with the majority of the quantization literature. As a result, we do not believe that the experiments with the pre-activation ResNet18 will offer additional insight, besides allowing for a slightly more calibrated comparison against e.g. LQ-Net or PACT. Instead, we believe that a completely different architecture (MobileNet) better complements our ResNet18 experiments.\\n\\nIn summary, we hope to have convinced you of the practical importance of quantizing the first and last layers. On the side of the experiments provided, we believe we have produced significant evidence in favour of RQ. The code to reproduce our results, as well as to do additional experiments, is currently undergoing regulatory approval. 
Please stay tuned for our announcement and feel free to contact us with questions about your own re-implementation once the contact details are available.\\n\\n2) Thank you for the pointer to this work; we believe it provides interesting food for thought for future hardware choices. We base our argument not on a specific chipset, but on the properties of generally available chips on the market today. Examples of state-of-the-art chips that especially target fixed-point computations include: (Qualcomm) Hexacore 68X, (Intel) Movidius, (ARM) NEON. In case the application warrants specialized hardware (ASICs) or FPGAs, there will always be highly efficient specialized solutions that might allow for different bit-precisions (or even mix fixed-point and floating-point representations [1]). However, it becomes increasingly difficult to find a fair basis for speed/accuracy comparison when allowing for arbitrary hardware implementations and to account for the additional overhead of e.g. channel-wise grids. Again, we believe our experimental efforts lay sufficient claim to the validity of RQ by comparing against many works that use fixed-point shared grids. Any additional modifications, such as mixed precision, channel-wise grids or any of the other strategies referenced in our paper, are orthogonal to our method, and it is reasonable to believe that including them will benefit RQ as well.\\n\\n[1] Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations: https://arxiv.org/pdf/1703.03073.pdf\", \"edit\": \"After fixing the BOP count metric (see general comment), the 4/4 bit LQ-Net BOP count lies between the 5/5 and 6/6 bit RQ models. In this case, we observe that the accuracy of RQ is slightly worse than the LQ-Net for approximately the same count of BOPs, which could be explained by the non-uniform grid and channel-wise kernel quantization.\"}", "{\"comment\": \"Dear authors,\\n\\nThanks for your detailed answer, but I still have some doubts.\\n\\n1): You have clarified the differences between this submission and [1] via analysis. But I don't think it is too hard to implement your approach using pre-activation ResNet-18 and keeping the first and the last layer at full precision. Without any results provided, I am still not sure to what extent these modifications can influence the performance. After all, a 7% gap on ImageNet is not small.\\n\\n2): You argue that it is not hardware-friendly to use a separate quantization grid per channel. However, since you did not implement it on any hardware device, your argument cannot convince me. In fact, a NIPS 2018 paper [1] claims that \\\"heterogeneously binarized (mixed bitwise) systems yield FPGA- and ASIC-based implementations that are correspondingly more efficient in both circuit area and energy efficiency than their homogeneous counterparts.\\\" In this paper, each parameter/activation has a different bitwidth, but they have shown that it is still efficient to implement on hardware platforms. \\n\\nIf you can provide any results here, that would be better. Thanks again for your patient answer.\", \"reference\": \"[1]: \\\"Heterogeneous Bitwidth Binarization in Convolutional Neural Networks\\\", NIPS 2018.\", \"title\": \"Response to authors\"}", "{\"title\": \"We will update the related work section and different tasks are left for future work.\", \"comment\": \"Dear Reviewer 2,\\n\\nThank you for your review and supportive comments. 
\\n\\nWe will make sure to update the related work section with the work of Soudry et al. (2014). As for Williams (1992): to our understanding, the focus of that paper was to introduce the unbiased score function estimator REINFORCE for the gradient of an expectation of a non-differentiable function. In this sense, Williams (1992) is more of a related work to the concrete / Gumbel-softmax approaches, rather than to the stochastic rounding of Gupta et al. (2015). We will update the submission to include a brief discussion of REINFORCE and concrete / Gumbel-softmax as choices for the fourth element of RQ. \\n\\nRegarding experiments on different tasks: we agree that it would be interesting to check performance on tasks that require more \\u201cprecision\\u201d, such as regression. We chose classification for this submission, as it provides a large amount of literature to compare against, and leave the exploration of different tasks for future work.\"}", "{\"title\": \"Only 2% of the probability mass is truncated and the regularization aspect is definitely interesting.\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you for your review and supportive comments.\\n\\nRegarding the bias of the local grid approximation: we mentioned in the main text that the local grid is constructed such that points that are within \\\\delta standard deviations from the mean are always part of it. For all of our experiments, we set \\\\delta = 3, which means that, roughly, only 2% of the probability mass of the logistic distribution is truncated. Unfortunately, due to lack of space, we moved these experimental details about hyperparameters to the appendix.\\n\\nRegarding the regularization aspect: indeed, we observed that for VGG, quantizing to 8/8 bits resulted in consistently improved test errors. We are definitely aware of [https://arxiv.org/abs/1804.05862] and believe that further research in this direction is fruitful.\"}", "{\"title\": \"There is extra overhead involved, we will add exemplary learning curves and non-uniform grids are left for future work.\", \"comment\": \"Dear Reviewer 3,\\n\\nThank you for your review and supportive comments.\\n\\nAddressing the first point, on training speed: training a neural network with the proposed method indeed imposes an additional burden in computing and sampling the categorical probabilities over the local grid for every weight and activation in the network. As such, this method introduces an overhead which is not present in methods that rely on deterministic rounding and the straight-through estimator for gradients. As for convergence speed, we will include exemplary learning curves for the 32/32, 8/8 and 2/2 bit VGG in the appendix.\\n\\nAddressing your second point, about non-uniform grids: as you have stated, this method can easily be extended to non-uniform grids. Doing so would only require evaluating the CDF of the continuous signal at different points on the real line. We have mentioned this possibility of non-uniform grids in the conclusion of our work. The reason why we consider only uniform grids is that non-uniform grids, although more powerful, generally do not allow for a straightforward implementation on today\\u2019s low-bit hardware. 
We mention that we explicitly focus on uniform grids for this specific reason of hardware suitability.\"}", "{\"title\": \"This submission is by no means a duplicate.\", \"comment\": \"Dear anonymous commenter,\\n\\nAlthough the proposed relaxed quantization method shares some similarities with DARTS, this submission is by no means a duplicate. The similarities can be summarized in that both methods consider the computation of gradients through a non-differentiable selection mechanism. In our work, selection happens between grid points. In DARTS, selection happens between choices of neural network architecture elements. Please note that in our work, we propose to use the relaxation of the categorical choice in order to draw samples, whereas in DARTS, the relaxation is performed by learning a weighted average. \\n\\nWe hope to have interpreted and answered your question appropriately. Please let us know if there are any remaining questions.\"}", "{\"title\": \"We beg to differ on the very poor performance on Imagenet, there are important details that have to be taken into account.\", \"comment\": \"Dear anonymous commenter,\\n\\nThank you for the interest in our work and for bringing [1, 2] to our attention. First of all, we would like to respectfully disagree with the comment of \\u201cvery poor performance on Imagenet\\u201d. More specifically, we believe that there are some important differences between e.g. [1] and this work that do not lend themselves to a fair comparison.\\n\\nTo further elaborate, in [1] the authors propose a non-uniform quantization grid while arguing for it being compatible with bit operations. In our work, we focus on uniform quantization grids because they lend themselves to straightforward implementation on current hardware. The more powerful grid proposed in [1] is orthogonal to the contributions of this work and can be further employed to boost the performance of RQ. We will update the paper with an appropriate discussion.\\n\\nIt is also worth pointing out several subtleties w.r.t. the hyperparameters and details of the experiments in [1] that make a fair comparison difficult:\\n\\n - First of all, it seems that [1] used a modified pre-activation ResNet18 architecture (judging from the paper and the publicly available code of LQ-Net), which is different from the standard ResNet18 architecture that we and the other baselines employed (our ResNet18 was based on https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py). \\n\\n - Secondly, [1, 2, 3] did not quantize the first and last layer of the network; while this can allow for better performance in terms of top-1/top-5 accuracy, it also negatively affects the model efficiency, as the BOP count will be (much) higher than that of our 4/4 model. For example, on a ResNet-18 with 4/4 bits and no quantization of the first and last layer, we get approximately 24 GBOPs extra (according to the metric we used in the submission) compared to an 8/8 bit model that quantizes all weights and activations. In this sense, the 8-bit RQ has better accuracy while also maintaining better efficiency than the 4-bit LQ-Net. Similar arguments can be made for [2, 3]. \\n\\n - Thirdly, it seems that [1] also used a much more flexible quantization grid construction for the weights; it assumed a separate quantization grid per channel, rather than per entire layer (as in this work). This further increases the flexibility of the quantization model, but it does make hardware implementations more difficult and less efficient. 
As before, such a grid construction is easily applied to RQ and can similarly further improve performance.\\n\\nFinally, we did not compare against [3] as it did not provide any results for the architectures we compare against in this paper. Their ImageNet results were obtained using a variant of the AlexNet architecture, whereas we compare on the more recent ResNet18 and MobileNet. After reading [1], however, we were made aware of the ResNet18 results presented in their git repo, so we will update the paper with those numbers. Similarly to [1, 2], not quantizing the first and last layer results in worse accuracy/efficiency trade-offs than RQ.\"}", "{\"comment\": \"Dear authors and reviewers,\\n\\nPlease check the performance of the current state-of-the-art approaches [1, 2, 3] on ImageNet. For 4-bit ResNet-18, they can achieve near-lossless results. For example, LQ-Net [1] only has 0.3% and 0.4% Top-1 and Top-5 accuracy drops, respectively. But this paper has more than a 7% Top-1 accuracy drop. Even the uniform quantization approach DoReFa-Net performs much better than this submission.\\nI don't know why this submission just \\\"ignores\\\" these approaches.\\n\\nReferences (only listing three of them):\\n[1]: \\\"LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks\\\". ECCV 2018. \\n[2]: \\\"PACT: Parameterized Clipping Activation for Quantized Neural Networks\\\". https://arxiv.org/pdf/1805.06085\\n[3]: \\\"DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients\\\",\", \"https\": \"//arxiv.org/abs/1606.06160.\", \"title\": \"Very poor performance on ImageNet.\"}", "{\"comment\": \"Isn't this a duplicate submission of DARTS?\", \"https\": \"//openreview.net/forum?id=S1eYHoC5FX\", \"title\": \"Isn't this a duplicate submission of DARTS?\"}", "{\"title\": \"Good paper that proposes an effective method to train neural networks with quantized reduced-precision synapses and activations\", \"review\": \"The authors propose a unified and general way of training neural networks with reduced-precision quantized synaptic weights and activations. The use case for such quantization is the deployment of neural network models on resource-constrained devices, such as mobile phones and embedded devices.\", \"the_paper_is_very_well_organized_and_systematically_illustrates_and_motivates_the_ingredients_that_allows_the_authors_to_achieve_their_goal\": \"a quantization grid with learnable position and range, stochastic quantization due to noise, and relaxing the hard categorical quantization assignment to a concrete distribution.\\nThe authors then validate their method on several architectures (LeNet-5, VGG7, ResNet and MobileNet) on several datasets (MNIST, CIFAR10 and ImageNet), demonstrating competitive results both in terms of precision reduction and accuracy.\", \"minor_comments\": [\"It would be interesting to know whether training with the proposed relaxed quantization method is slower than with full-precision activations and weights. It would have been informative to show learning curves comparing learning speed in the two cases.\", \"It seems that this work could be generalized in a relatively straightforward way to a case in which the quantization grid is not uniform, but instead all quantization intervals are optimized independently. 
It would have been interesting if the authors had discussed this scenario, or at least motivated why they only considered quantization on a regular grid.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"New approach to quantizing activations, SotA/competitive on several real image problems\", \"review\": \"Quality:\\nThe work is well done. Experiments cover a range of problems and a range of quantization resolutions. The related work section, in particular, I thought was very nicely done. Empirical results are strong. \\n\\nIn section 2.2, it bothers me that the amount of bias introduced by using the local grid approximation is never really assessed. How much probability mass is left out by truncating the Gumbel-softmax, in practice?\", \"clarity\": \"Well presented. I believe I'd be able to implement this, as a practitioner.\", \"originality\": \"Nice to see the concrete approximation having an impact in the quantization space.\", \"significance\": \"Quantization has obvious practical interest. The regularization aspect is striking (quantization yielded slightly improved test error on CIFAR-10; is that w/in the error bars?). A recent work [https://arxiv.org/abs/1804.05862] links model compressibility to generalization; while this work is more focused on activations, there is no reason that it couldn't be used for weights as well.\", \"nits\": \"top of pg 6 'reduced execution speeds' -> times, or increased exec speeds\\n'sparcity' misspelled\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Fairly straight-forward ideas but good results and solid empirical work\", \"review\": \"Summary\\n=======\\nThis paper introduces a method for learning neural networks with quantized weights and activations. The main idea is to stochastically \\u2013 rather than deterministically \\u2013 quantize values, and to replace the resulting categorical distribution over quantized values with a continuous relaxation (the \\\"concrete distribution\\\" or \\\"Gumbel-Softmax distribution\\\"; Maddison et al., 2016; Jang et al., 2016). Good empirical performance is demonstrated for LeNet-5 applied to MNIST, VGG applied to CIFAR-10, and MobileNet and ResNet-18 applied to ImageNet.\\n\\nReview\\n======\", \"relevance\": \"Training non-differentiable neural networks is a challenging and important problem for several applications and a frequent topic at ICLR.\", \"novelty\": \"Conceptually, the proposed approach seems like a straight-forward application/extension of existing methods, but I'm unaware of any paper which uses the concrete distribution for the express purpose of improved efficiency as in this paper. There is a thorough discussion of related work, although I was missing Williams (1992), who used stochastic rounding before Gupta et al. (2015), and Soudry et al. (2014), who introduced a Bayesian approach to deal with discrete weights and activations.\", \"results\": \"The empirical work is thorough, achieving state-of-the-art results in several classification benchmarks. 
It would be interesting to see how well these methods perform on other tasks (e.g., compression or even regression), even though the literature on quantization seems to focus on classification.\", \"clarity\": \"The paper is well written and clear.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
rkeXrIIt_4
Understanding the Relation Between Maximum-Entropy Inverse Reinforcement Learning and Behaviour Cloning
[ "Seyed Kamyar Seyed Ghasemipour", "Shane Gu", "Richard Zemel" ]
In many settings, it is desirable to learn decision-making and control policies through learning or from expert demonstrations. The most common approaches under this framework are Behaviour Cloning (BC), and Inverse Reinforcement Learning (IRL). Recent methods for IRL have demonstrated the capacity to learn effective policies with access to a very limited set of demonstrations, a scenario in which BC methods often fail. Unfortunately, directly comparing the algorithms for these methods does not provide adequate intuition for understanding this difference in performance. This is the motivating factor for our work. We begin by presenting $f$-MAX, a generalization of AIRL (Fu et al., 2018), a state-of-the-art IRL method. $f$-MAX provides grounds for more directly comparing the objectives for LfD. We demonstrate that $f$-MAX, and by inheritance AIRL, is a subset of the cost-regularized IRL framework laid out by Ho & Ermon (2016). We conclude by empirically evaluating the factors of difference between various LfD objectives in the continuous control domain.
[ "Inverse Reinforcement Learning", "Behaviour Cloning", "f-divergence", "distribution matching" ]
Accept
https://openreview.net/pdf?id=rkeXrIIt_4
https://openreview.net/forum?id=rkeXrIIt_4
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "Hyg8BQuw5N", "ByeU-VZN9V", "Sygj1wxNq4" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690302472, 1555465214064, 1555461858873 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper48/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper48/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Review\", \"review\": \"This paper discusses several adversarial imitation learning algorithms and connects them through f-divergence. A variant of the algorithms is proposed that minimizes forward KLD b/w the expert policy and the student policy.\\n\\nThe f-divergence formulations of adversarial IL in the paper are directly derived from the f-GAN work, which is thus pretty straightforward.\\n\\nThe two hypotheses that matching state marginals (instead of action marginals) is better and that forward KL is better than reverse KL look reasonable.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Maximum-entropy inverse RL methods and comparison with Behaviour Cloning\", \"review\": \"The paper presents a framework to unify different Maximum-entropy inverse RL methods and to compare them with Behaviour Cloning (BC). Specifically, the paper is motivated by the question of why BC performs significantly worse than Maximum-Entropy IRL in the small data regime. The authors hypothesise that the reasons are (i) that BC tries to model conditional policies while other approaches try to match also state marginals and (ii) due to the moment matching properties of BC as opposed to mode seeking divergences that can better match expert policies to learning policies.\\n\\nI believe that the paper presents an interesting unified framework based on f-divergences together with some novel modifications of previous methods. The experiments on continuous optimal control support to some extend the theoretical analysis of the paper. However, a more extensive experimental comparison is needed to draw more clear conclusions.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
ByeMHULt_N
Learning Deep Latent-variable MRFs with Amortized Bethe Free Energy Minimization
[ "Sam Wiseman" ]
While much recent work has targeted learning deep discrete latent variable models with variational inference, this setting remains challenging, and it is often necessary to make use of potentially high-variance gradient estimators in optimizing the ELBO. As an alternative, we propose to optimize a non-ELBO objective derived from the Bethe free energy approximation to an MRF's partition function. This objective gives rise to a saddle-point learning problem, which we train inference networks to approximately optimize. The derived objective requires no sampling, and can be efficiently computed for many MRFs of interest. We evaluate the proposed approach in learning high-order neural HMMs on text, and find that it often outperforms other approximate inference schemes in terms of true held-out log likelihood. At the same time, we find that all the approximate inference-based approaches to learning high-order neural HMMs we consider underperform learning with exact inference by a significant margin.
[ "MRF", "latent variable", "Bethe", "UGM", "approximate inference", "deep generative model" ]
Accept
https://openreview.net/pdf?id=ByeMHULt_N
https://openreview.net/forum?id=ByeMHULt_N
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "r1lfOmODqE", "HkxXugn-cV", "SyxgecVYFN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690345587, 1555312746624, 1554758120262 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper47/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper47/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Interesting idea\", \"review\": \"This paper presents an objective for learning latent variable MRFs based on Bethe free energy and amortized inference. It is different from optimizing the standard ELBO in that it does not require sampling (which has large variance) nor it is a lower/upper bound of the log-likelihood for general structured data. On some benchmark with neural HMMs, it is shown that the proposed approach achieves better held-out likelihood than other variational inference based approaches.\\n\\nThis paper presents an interesting idea which blends both the deep generative models research as well as the traditional Bethe free energy formulation. The prelmimarny results seems promising. I wonder how much difficult the saddle-point optimization will become on more complex models comparing with ELBO optimization.\", \"minor_comment\": \"The last equation in Section 2.2: the second summation should be over z_3'.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Clear method for learning deep latent-variable MRFs using Bethe free energy optimization\", \"review\": \"This paper proposed a method for learning deep latent-variable MRF with an optimization objective that utilizes the Bethe free energy. To solve the underlying constraints of Bethe free energy optimizations, the authors proposed to represent the \\\\tau vector using the basis of the subspace of the equality constraints and put the positivity constraints to be part of the objective. By applying these techniques, we obtain a saddle-point optimization objective with trainable inference networks and hence we can train the latent-variable MRF. The authors did some experiments on 2nd and 3rd order HMMs for empirical studies.\", \"pros\": \"1. The paper is well-written and easy to follow.\\n\\n2. The original optimization for Bethe free energy is with constraints. However, the proposed objective function is without constraints, which is easier to train. The authors used the Moore-Penrose pseudoinverse of the constraint matrix V to represent the subspace of \\\\tau, which makes the optimization process easier.\", \"cons_and_questions\": \"1. From the experiment results, it seems that the proposed method is not behaving well compared to the exact methods, if we do not use \\\"exact marginals\\\". I doubt if the performance improvements of \\\"L_F + exact marginals\\\" are due to the \\\"exact marginals\\\", not the proposed method.\\n\\n2. For experiment results of the baseline methods in Table 1 and the proposed method in Table 2, the authors try to compare the PPL performances between them. Are the two experiment settings (one for directed HMMs and one for undirected HMMs) comparable? If they are, then why not putting the two tables together and compared all experiment results between them? If not, is the comparison fair?\\n\\n3. 
It would be great if the authors could also work on some models where we cannot tractably compute log marginal likelihoods, instead of only HMMs.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
Bkgzr8LKOV
A Seed-Augment-Train Framework for Universal Digit Classification
[ "Vinay Uday Prabhu", "Sanghyun Han", "Dian Ang Yap", "Mihail Douhaniaris", "Preethi Seshadri" ]
In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that contains a synthetic seed image dataset generation procedure for languages with different numeral systems using freely available open font file datasets. This seed dataset of images is then augmented to create a purely synthetic training dataset, which is in turn used to train a deep neural network and test on held-out real-world handwritten digit datasets spanning five Indic scripts: Kannada, Tamil, Gujarati, Malayalam, and Devanagari. We showcase the efficacy of this approach both qualitatively, by training a Boundary-seeking GAN (BGAN) that generates realistic digit images in the five languages, and quantitatively, by testing a CNN trained on the synthetic data on the real-world datasets. This establishes not only an interesting nexus between the world of font datasets and transfer learning but also provides a recipe for universal digit classification in any script.
[ "Transfer learning", "Generative models", "digit classification", "domain transfer" ]
Accept
https://openreview.net/pdf?id=Bkgzr8LKOV
https://openreview.net/forum?id=Bkgzr8LKOV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "Bkg5Lqdw54", "BkxVOiAz9E", "Bkg3hb3-9E" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555692114144, 1555389291577, 1555313076404 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper46/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper46/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Although the scores are below our average acceptance rate, we believe this paper has interesting contributions:\\n\\n1. The set of chosen languages is diverse.\\n2. Synthetic and real dataset for digits in these languages\\n3. Interesting way of using GANs to improve the downstream task.\"}", "{\"title\": \"digit data augmentation\", \"review\": \"This work aims to create handwritten digit data like MNIST in other languages. The authors started with open fonts dataset and then applied image augmentation techniques to add distortions. Finally, the authors collected real handwritten digit data, and trained with BGAN to generate more handwritten like images with labels. The authors showed that direct training on the synthetic dataset gets 60-76% accuracy, and adding a small amount of real-world data gets a substantial improvement.\", \"pros\": \"1. It's clearly written and easy to follow.\\n2. The authors showcase a working example of synthetic-to-real transfer learning, which could be interesting to the broader ML community.\", \"cons\": \"1. Ablation study missing. What would the results be if we just use the GAN generated part, and what if we only use the rest?\\n\\nOverall I think this paper is intersting, but I don't consider it to be very relevant for this Deep Generative Models for Highly Structured Data workshop, since it's a direct application of GAN and might be more relevant for OCR venues.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"This paper presents new datasets for give languages and proposes a new framework (SAT) for font image datasets generation. I think this paper makes reasonable contribution to the literature.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
SJx-SULKOV
Interactive Image Generation Using Scene Graphs
[ "Gaurav Mittal", "Shubham Agrawal", "Anuva Agarwal", "Sushant Mehta", "Tanya Marwah" ]
Recent years have witnessed some exciting developments in the domain of generating images from scene-based text descriptions. These approaches have primarily focused on generating images from a static text description and are limited to generating images in a single pass. They are unable to generate an image interactively based on an incrementally additive text description (something that is more intuitive and similar to the way we describe an image). We propose a method to generate an image incrementally based on a sequence of graphs of scene descriptions (scene-graphs). We propose a recurrent network architecture that preserves the image content generated in previous steps and modifies the cumulative image as per the newly provided scene information. Our model utilizes Graph Convolutional Networks (GCN) to cater to variable-sized scene graphs along with Generative Adversarial image translation networks to generate realistic multi-object images without needing any intermediate supervision during training. We experiment with the COCO-Stuff dataset, which has multi-object images along with annotations describing the visual scene, and show that our model significantly outperforms other approaches on the same dataset in generating visually consistent images for incrementally growing scene graphs.
[ "Generative Models", "Image Generation", "Adversarial Learning", "Scene Graphs", "Interactive", "Graph Convolutional Network", "Image Translation", "Cascade Refinement Network" ]
Accept
https://openreview.net/pdf?id=SJx-SULKOV
https://openreview.net/forum?id=SJx-SULKOV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "Bkgw74uw9E", "SklGnjYzqV", "B1gr_eV2tV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690526901, 1555368873651, 1554952301117 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper45/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper45/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"interactive image generation using scene graphs\", \"review\": \"This paper proposes a conditional adversarial model that iteratively generates images given a scene graph. The scene graph describes the relations between the different objects and components of the image. It is shown that images can be generated iteratively by augmenting the scene graph with new objects and relations, and the existing image content will be maintained.\\n\\nThis work uses a combination of many different building blocks that have recently gained traction in literature, including graph convolutional networks to process the scene graphs, networks for bounding box prediction and conditional generative adversarial networks. These are combined with a variety of loss functions (5 in total).\\n\\nThe resulting system is shown to work reasonably well, but it is quite complex and I feel that the importance of each individual component could be demonstrated better by including some ablations -- what would happen if a GAN were conditioned directly on the output of the GCN that processes the scene graph, for example? Is the intermediate step that produces segmentations and bounding boxes strictly necessary?\\n\\nIn two different places in the manuscript, it is stated that the model is the first of its kind to the authors' knowledge. I find such statements a bit inappropriate when they refer to very specific problem settings. There has definitely been a lot of closely related work e.g. on image generation conditioned on captions (including some that uses scene graphs as an intermediate representation). Stating that the work is presumably the first to use this particular specific combination of input representations and model structure is not very meaningful.\\n\\nThe manuscript contains quite a few grammatical and spelling errors and would benefit from proofreading.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good performance improvements over baselines, but not entirely clear what techniques are new\", \"review\": \"For scene-graph-to-image generation, this paper proposes some changes to the baseline (Johnson et al 2018) in order to produce the image iteratively. The scene-graph is accumulated over three steps, and for each step an image is produced. The intermediate images are not required; i.e. no intermediate supervision necessary. The proposed model conditions each step on the previous image, and introduces losses to encourage continuity in the sequence of images. 
The authors show performance improvements on COCO-Stuff in inception score (quality / realism) and mean perceptual similarity loss (consistency across steps) compared to Johnson et al.\", \"pros\": [\"Performance improvements seem solid, and the examples in Figures 3 and 4 seem convincing.\", \"It seems that this work has some novelty in that it is the first to generate real-world images iteratively without intermediate supervision\"], \"cons\": [\"The description of the baseline and proposed model (sections 3.1 and 3.2) would benefit from some more mathematical detail, i.e. introduce some notation in 3.1 so that in 3.2 you can precisely explain how you are changing the model.\", \"As a person who is not familiar with the related work, I was unsure what techniques are new here. The bullet-point list in section 3.2 seems to be a list of the differences between the baseline and the proposed model. As far as I can tell, these differences seem quite minor, except for the first bullet point which sounds like it might be quite complicated (but it's not described in any detail, so perhaps it is simple). This is where some more precise mathematical notation would be useful. Similarly, in the list of losses in section 3.2 I'm not sure what's part of the baseline and what's new.\", \"In particular, while reading the paper I was uncertain whether the baseline methods (Johnson et al 2018 and Xu et al 2017) are \\\"iterative\\\". Some lines imply they are iterative in some sense (\\\"AttnGANs also begin with a low-resolution image, and then improve it over multiple steps to come up with a final image\\\" / \\\"The layout is passed to a cascaded refinement network which generates an output image at increasing spatial scales.\\\"), but this paper claims to be the first to iteratively generate real-world images without intermediate supervision. So I'm unsure what exactly is different about this paper compared to previous work.\"], \"other_comments\": [\"\\\"We also note that Stage 1 performs the best. From our observations this is because the vividness of the image colors and object definitions is the best at the stage 1, and begin to fade out stage 2 onwards.\\\" Though this explains some more fine-grained ways in which stage 1 is the best (vividness, object definitions), it doesn't explain *why* stage 1 is the best (i.e. why is the model most able to make realistic images on the middle step? If it can make realistic images on step 1 why can't it improve on them on step 2?)\", \"\\\"coarse-to-fine\\\" not \\\"course-to-fine\\\"\"], \"note\": \"Though I am familiar with Deep Learning, I am not very familiar with computer vision, so it is possible that I am missing something in my reading of this paper.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
SJgZSULYdN
HYPE: Human-eYe Perceptual Evaluation of Generative Models
[ "Sharon Zhou", "Mitchell Gordon", "Ranjay Krishna", "Austin Narcomey", "Durim Morina", "Michael S. Bernstein" ]
Generative models often use human evaluations to determine and justify progress. Unfortunately, existing human evaluation methods are ad-hoc: there is currently no standardized, validated evaluation that: (1) measures perceptual fidelity, (2) is reliable, (3) separates models into clear rank order, and (4) ensures high-quality measurement without intractable cost. In response, we construct Human-eYe Perceptual Evaluation (HYPE), a human metric that is (1) grounded in psychophysics research in perception, (2) reliable across different sets of randomly sampled outputs from a model, (3) results in separable model performances, and (4) efficient in cost and time. We introduce two methods. The first, HYPE-Time, measures visual perception under adaptive time constraints to determine the minimum length of time (e.g., 250ms) that model output such as a generated face needs to be visible for people to distinguish it as real or fake. The second, HYPE-Infinity, measures human error rate on fake and real images with no time constraints, maintaining stability and drastically reducing time and cost. We test HYPE across four state-of-the-art generative adversarial networks (GANs) on unconditional image generation using two datasets, the popular CelebA and the newer higher-resolution FFHQ, and two sampling techniques of model outputs. By simulating HYPE's evaluation multiple times, we demonstrate consistent ranking of different models, identifying StyleGAN with truncation trick sampling (27.6% HYPE-Infinity deception rate, with roughly one quarter of images being misclassified by humans) as superior to StyleGAN without truncation (19.0%) on FFHQ.
[ "Generative models", "generative evaluation", "evaluation metric", "human evaluation" ]
Accept
https://openreview.net/pdf?id=SJgZSULYdN
https://openreview.net/forum?id=SJgZSULYdN
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "BJxL0Xuw5E", "Bkl2aSe45E", "rJxJvdtzcE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690446058, 1555461572381, 1555368023477 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper44/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper44/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Review for \\\"HYPE: Human-eYe Perceptual Evaluation of Generative Models\\\"\", \"review\": \"The paper introduces two methods for evaluating generative models, \\\"HYPE_time\\\" and \\\"HYPE_infinity\\\". Both methods use human reaction times or error rates when asked to distinguish fake images from real data.\", \"several_comments_about_the_paper\": \"\", \"pros\": [\"The paper is very well written and clear.\", \"The process is fully automated and reproducible, which is a drastic difference with other user studies this reviewer is aware of. The process is thoroughly documented.\", \"This paper is quite out there, the ideas are to me very novel and reasonably surprising (at the degree of thoroughness that this paper makes use of them). I don't think this is ready for a conference, but in terms of giving a space for more risky ideas to be discussed (as workshops should), I think this paper fits the bill.\", \"As mentioned in the paper, the method seems to be reliable, moderately fast (10 minutes), and measures perceptual fidelity well.\", \"I think the experiments are enough evidence to support the authors' claims.\"], \"cons\": [\"It's undiscussed (and to me unlikely) wether the proposed scores are good for ranking models that are bad (but one a lot worse than the other, such as models in the middle of training), given that distinguishability for bad models might be essentially the same, or require a lot more time and humans to reach confidence intervals that are non-overlapping.\", \"There is an emphasis on 'reality', which ignores 'diversity', and I think the paper should stress more that this in effect is not trying to provide a way to evaluate generative models per se, but simply the 'reality' of the samples.\", \"In the same lines of the above, 'reality' is not the same as 'fidelity', a model that for examples produces reasonable faces but ignores modeling the background of an image distribution has obviously worse sample quality, but a human might think the samples to be more real. Essentially, it focuses to much on the human-centric preconceived notion of reality, rather than a comparison (human based or not) between the true and generated data distributions.\", \"All experiments are on faces, which makes the reader wander wether the accuracy or 'cost-effectiveness' of the method depends on how good humans are at judging the quality of faces.\", \"It's quite expensive to run this thing ($60 makes it unusable for e.g. grid searches).\", \"The method is obviously specific to data types like images where humans have a notion of what a 'real' sample is, which limits the method from ever working in things like representation spaces.\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Timely work, well presented\", \"review\": [\"The paper proposes a framework for human evaluation of generative models of images. 
It is based on samples, so it is compatible with any flavour of generative model (likelihood-based, adversarial or otherwise). Two different evaluation strategies are proposed: one based on the time it takes for humans to distinguish generated images from real images, and another which simply measures the percentage of images that are wrongly classified.\", \"The implementation of the human evaluation setup is described in appropriate detail, and attention to cost is also given. The results are comprehensive and statistical tests are used to show their significance. The approach is also compared to FID, a computational evaluation metric that is currently popular.\", \"Overall, this work is timely and it is well presented, so I am in favour of acceptance. Nevertheless I have a few more comments and suggested improvements below:\", \"It is demonstrated that the correlation of HYPE and FID is relatively poor, and it is implied that this demonstrates that FID is a poor metric. However, as the authors state earlier on in the paper, HYPE can only measure realism, not diversity. FID is explicitly constructed to also be affected by sample diversity, so in that light it is not surprising that the two do not correlate very well, and that higher truncation leads to improved HYPE but worse FID scores -- it is well known that truncation reduces diversity of the samples, in favour of improved fidelity. (I do not wish to imply that FID is actually a good metric -- I do believe it is a poor metric, but not for this particular reason.)\", \"While the authors state clearly that HYPE does not measure diversity, I think it would be worth discussing in more detail how one could use human evaluation to measure diversity, as it is arguably a more interesting challenge. As it stands, the HYPE metric could probably be fooled by a \\\"model\\\" which simply stores a few training examples and randomly selects them with equal probability. Also measuring the diversity of the samples in some way would prevent this kind of cheating.\", \"A common issue with human evaluation is ambiguity in the task specification: the raters are instructed to determine which images are real, but they may be prone to misinterpreting the task in a way that biases the results. While rater training and immediate feedback undoubtedly help to limit this effect, it is still worth considering this carefully, and I think a few diagrams or screenshots of the rater interface would be useful additions to the manuscript in this respect.\", \"In the introduction, it is implied that likelihood (measured in the input space) would be the ideal metric for generative models if it were always easy to compute (which it often isn't). Theis et al. (2015), cited there, also call this into question. I find the juxtaposition of this citation and the sentence before it a bit misleading.\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SJgxrLLKOE
Generative Models for Graph-Based Protein Design
[ "John Ingraham", "Vikas K. Garg", "Regina Barzilay", "Tommi Jaakkola" ]
Engineered proteins offer the potential to solve many problems in biomedicine, energy, and materials science, but creating designs that succeed is difficult in practice. A significant aspect of this challenge is the complex coupling between protein sequence and 3D structure, and the task of finding a viable design is often referred to as the inverse protein folding problem. We develop generative models for protein sequences conditioned on a graph-structured specification of the design target. Our approach efficiently captures the complex dependencies in proteins by focusing on those that are long-range in sequence but local in 3D space. Our framework significantly improves upon prior parametric models of protein sequences given structure, and takes a step toward rapid and targeted biomolecular design with the aid of deep generative models.
[ "generative models", "proteins", "structure", "protein sequences", "protein design", "potential", "many problems", "biomedicine", "energy" ]
Accept
https://openreview.net/pdf?id=SJgxrLLKOE
https://openreview.net/forum?id=SJgxrLLKOE
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "BklW7VOwcN", "HkecY49WqE", "H1ejhIc0tE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690521069, 1555305601541, 1555109554545 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper42/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper42/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"The authors propose new model with geometric features for modeling 3d structure. There were some concerns with regard to clarity (e.g. how does decoding work?), which should be addressed for the camera-ready.\"}", "{\"title\": \"Interesting but experiments are lacking\", \"review\": \"This paper proposes a Transformer-based architecture for generating an amino acid sequence for a protein, given its 3D structure. The authors define custom geometric features, and feed it to a model that has elements of a Transformer and graph convolutional neural network.\", \"the_main_weakness_is_that_the_experiments_section_is_limited\": [\"Direct comparison with graph convolutional neural networks is missing, despite this being a more standard way to do deep learning over graphs.\", \"There should be ablations of the different features explored, and perhaps comparisons to simpler featurization schemes that have been proposed in the past.\", \"There is a comparison with SPIN2, but there are some weird methodological issues, namely that pseudocounts were added post hoc to prevent infinite perplexity. There should be a way to fix numerical stability issues directly, without having to add these pseudocounts.\", \"RNN baselines seem to basically learn unigram frequencies, which either suggests they were not tuned properly or that they were too weak baselines, and some slightly better baselines should also be explored.\", \"The task of mapping structure to amino acid sequence was motivated by the goal of protein generation. However there is no actual evaluation of the generated sequences, only perplexity.\"], \"there_were_also_some_points_of_confusion\": [\"In 2.1, the authors mention that they can handle both \\\"rigid backbone\\\" and \\\"flexible backbone\\\" problems, but then exclusively discuss the rigid case. Since their featurization depends on having the 3D coordinates of all backbone amino acids, which seems to be a hallmark of the \\\"rigid\\\" setting, it is unclear how this extends to the \\\"flexible\\\" setting.\", \"It is unclear how the decoder works, especially because the j-th amino acid sequence is added to the edge features e_{ij}. This would seem to make it hard to use the standard masking trick to decode--do you have to recompute the entire set of features for each step of decoding?\"], \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Well written paper to an interesting & important application of generative models\", \"review\": \"This paper proposes to use an autoregressive Transformer model for the purpose of protein design. The model is well justified and overall the paper is written well. However there are couple of questions I have to authors:\\n\\n1. Since protein is a graph that doesn't have a clear ordering, which ordering to you use in autoregressive decoder ?\\n2. Could you also compare the model with non-deep neural network baselines ?\\n3. 
How expensive would it be to evaluate the energy of generated proteins?\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rkxkr8UKuN
Perceptual Generative Autoencoders
[ "Zijun Zhang", "Ruixiang Zhang", "Zongpeng Li", "Yoshua Bengio", "Liam Paull" ]
Modern generative models are usually designed to match target distributions directly in the data space, where the intrinsic dimensionality of data can be much lower than the ambient dimensionality. We argue that this discrepancy may contribute to the difficulties in training generative models. We therefore propose to map both the generated and target distributions to the latent space using the encoder of a standard autoencoder, and train the generator (or decoder) to match the target distribution in the latent space. The resulting method, perceptual generative autoencoder (PGA), is then incorporated with maximum likelihood or variational autoencoder (VAE) objective to train the generative model. With maximum likelihood, PGA generalizes the idea of reversible generative models to unrestricted neural network architectures and arbitrary latent dimensionalities. When combined with VAE, PGA can generate sharper samples than vanilla VAE.
[ "pga", "latent space", "maximum likelihood", "vae", "target distributions", "data space", "intrinsic dimensionality", "data", "lower" ]
Accept
https://openreview.net/pdf?id=rkxkr8UKuN
https://openreview.net/forum?id=rkxkr8UKuN
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HkxrS4dD9V", "SygwfBqM9E", "rJefg9sVF4" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690556618, 1555371278995, 1554459113556 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper41/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper41/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"perceptual generative autoencoders\", \"review\": \"This work proposes to use autoencoders to learn perceptually meaningful spaces in which to train generative models. Two variants of the framework are introduced, using maximum likelihood training and using a variational approach. This is a fresh take on autoencoders which uses ideas from invertible neural networks to enable training of generative models in latent space.\\n\\nOne of the main strengths of adversarial models seems to be their ability to incorporate strong inductive biases in the loss function (i.e. the convolutional architecture of the discriminator), and this work brings that ability to other types of generative models without requiring any adversarial components (and thus neatly avoiding the instability they bring).\\n\\nThe manuscript would benefit from a slightly extended exposition, and possibly a few diagrams, as Section 2 was a bit hard to follow. As a reader familiar with various different autoencoder paradigms, I had a particular prior about what each component represents, and this work uses these components in new and (initially) counterintuitive ways. Emphasising these differences with prior work, and demonstrating visually where and how each component is used, would improve readability.\\n\\nIt would also be interesting to describe in more detail how alpha, beta, gamma are tuned, what their optimal values tend to look like, and what this means. Ablations would also be interesting. For example, what happens when beta=0? The paragraph before formula (3) provides some motivation for this particular component of the loss function, but it would be interesting to see what happens in practice.\\n\\nStating that the results in Fig. 1 are competitive with GANs is perhaps a bit of a stretch, but overall the results are promising nevertheless, and I am curious to see this idea explored further. It would especially be interesting to see if this helps with scaling up e.g. likelihood-based models and other alternatives to adversarial models.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Convoluted, but an interesting idea\", \"review\": \"This paper relies on autoencoders in order to to distribution matching in high dimensional spaces, with the following recipe:\\n 1) build an autoencoder of the data g(f(x)) - eq 1.\\n 2) build an autoencoder in latent space - ensure that f(g(z)) can reconstruct both z samples from N(0, 1) - eq 2 and f(g(f(x))) - eq 3.\\n 3) Show (under assumptions) that if eq 2 is minimized, for z samples from N(0, 1), if f(g(z)) = f(g(f(x)) for some x in the original data space, then g(z) belongs to the reconstruction distribution. This entails that if the conditions of the theorem are satisfied (namely if f(g(z)) = f(g(f(x)) for some x in the original data space). sample quality will match reconstruction quality. 
\n 4) Use 3) to justify that f(g(z)) should have high likelihood in the distribution induced by f(g(x)). Achieve that either by maximum likelihood (approximate - since there is no guarantee that h is invertible) or by variational inference.\", \"equation_5\": \"Equation 5 follows from a change of variables and then using that f(x) for x in the data will be normally distributed. What ensures this? Minimizing equation (2) ensures that f(g(z)) will be normally distributed, with z sampled from N(0, 1).\", \"cons\": [\"The paper is very convoluted to read. Notation is missing and the discussion is missing important aspects, which makes verifying the correctness of the paper difficult. I urge the authors to add further discussions and figures.\", \"Parts of the loss function used are rather ad hoc.\", \"The method seems dependent on 3 important hyperparameters. The sensitivity to hyperparameters is not discussed.\"], \"pros\": [\"good empirical results\", \"code is open sourced\"], \"citations\": \"I would add a citation to VEEGAN [1], which also uses distribution matching in latent space.\\n\\n[1] Srivastava, Akash, et al. \\\"Veegan: Reducing mode collapse in gans using implicit variational learning.\\\" Advances in Neural Information Processing Systems. 2017.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
HyeJB8LKOV
Compositional GAN (Extended Abstract): Learning Image-Conditional Binary Composition
[ "Samaneh Azadi", "Deepak Pathak", "Sayna Ebrahimi", "Trevor Darrell" ]
Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism but are generally structured to sample from a single latent source ignoring the explicit spatial interaction between multiple entities that could be present in a scene. Capturing such complex interactions between different objects in the world, including their relative scaling, spatial layout, occlusion, or viewpoint transformation is a challenging problem. In this work, we compose a pair of objects in a conditional GAN framework using a novel self-consistent composition-by-decomposition network. Given object images from two distinct distributions, our model can generate a realistic composite image from their joint distribution following the texture and shape of the input objects. Our results reveal that the learned model captures potential interactions between the two object domains, and can output their realistic composed scene at test time.
[ "abstract", "scene", "compositional gan", "gans", "images", "surprising complexity", "realism", "single latent source" ]
Accept
https://openreview.net/pdf?id=HyeJB8LKOV
https://openreview.net/forum?id=HyeJB8LKOV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HkxY9mdPq4", "B1eWsUBNcE", "S1gpjKyptV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690384797, 1555482264760, 1554999717229 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper40/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper40/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"interesting problem and reasonable approach\", \"review\": \"The paper tackles the problem of combining two images into one in a sensible way. In particular, the inputs are two objects (e.g., a bottle and a basket) and the output is an image containing both objects (e.g., a bottle in the basket). The challenge is that there aren't enough paired inputs and output. The authors proposed to 1) generate noisy examples by segmentation and inpainting and 2) adding a \\\"self-consistency\\\" loss to encourage that objects occurred in the inputs also occur in the output and segments in the output is close to the corresponding objects in the input. The \\\"self-consistency\\\" loss is also applied at test time to refine the output.\\n\\nI find the problem pretty interesting. The model needs to learn the relative positions of the two objects as well as proper occlusion. The approach is pretty reasonable as well. I only have a couple questions / comments below.\\n\\n- Aside from the paired examples, there is nothing in the loss function encouraging the model to *compose* the inputs with natural occlusion and position etc. So I'd like to see how many paired examples are needed to achieve the reported results, and how the results change with varying numbers of paired examples.\\n\\n- It would be cool to control the relative position of the two objects.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"clear and interesting submission\", \"review\": \"The authors propose a loss function to generate natural images that include two separate objects via a GAN. The loss uses a sort of self-supervision by noting that the decomposition of a natural image with two objects into individual object images should match closely to the original object images. The decomposition and composition network are then learned jointly. Additionally, at test time, the authors provide a loss that tunes pixels to preserve color and texture.\\n\\nI thought this short paper was quite clear (given space constraints) --- the objective was presented and described well.\\n\\nThe authors claim originality that the composition self-consistency loss is a new insight --- I am not familiar with work that conflicts with that claim, though I cannot be certain.\\n\\nQuestions/comments\\n- What are the qualitative and quantitative differences between the $\\\\hat{c}^{after}$ and $\\\\hat{c}^{after}_s$ images? This should be made a bit more clear in the text.\\n- In the CelebA + Glasses experiment, what were the composite images used to train?\", \"rating\": \"5: Top 15% of accepted papers, strong accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SJe0ELLKuE
Bias Correction of Learned Generative Models via Likelihood-free Importance Weighting
[ "Aditya Grover", "Jiaming Song", "Ashish Kapoor", "Kenneth Tran", "Alekh Agarwal", "Eric Horvitz", "Stefano Ermon" ]
A learned generative model often gives biased statistics relative to the underlying data distribution. A standard technique to correct this bias is by importance weighting samples from the model by the likelihood ratio under the model and true distributions. When the likelihood ratio is unknown, it can be estimated by training a probabilistic classifier to distinguish samples from the two distributions. In this paper, we employ this likelihood-free importance weighting framework to correct for the bias in using state-of-the-art deep generative models. We find that this technique consistently improves standard goodness-of-fit metrics for evaluating the sample quality of state-of-the-art generative models, suggesting reduced bias. Finally, we demonstrate its utility on representative applications in a) data augmentation for classification using generative adversarial networks, and b) model-based policy evaluation using off-policy data.
[ "importance", "bias", "bias correction", "learned generative models", "samples", "model", "likelihood ratio", "learned generative model", "biased statistics", "data distribution" ]
Accept
https://openreview.net/pdf?id=SJe0ELLKuE
https://openreview.net/forum?id=SJe0ELLKuE
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HklF74dD5E", "HklZvojr5E", "rklH0mE8KN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690529347, 1555573593280, 1554559949262 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper39/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper39/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Interesting paper, but the experiments could be improved\", \"review\": \"The paper proposes to use importance sampling to debias expectations computed using deep generative models. They demonstrate the usefulness of the idea on tasks such as goodness-of-fit testing, data augmentation and model-based off-policy evaluation. While the underlying ideas have been proposed earlier, I think the combination of ideas (estimating density ratio using a deep neural network and using that to debias deep generative models) is novel and interesting.\", \"major_comments\": [\"The papers by Azadi et al. 2018 and Turner et al. 2018 propose rejection sampling for GANs. Given the similarity between rejection sampling and importance sampling (the latter is a soft-weighted version of the former), I wish the authors had more prominently discussed the connections between their paper and these papers, and empirically compared to rejection sampling in some of their experiments. I also feel noise contrastive estimation deserves a more prominent discussion.\", \"\\u201cSynthetic experiment\\u201d in page 5: how is the uncertainty computed?\", \"Page 6: \\u201censured that the classifiers used were well-calibrated\\u201d how?\", \"The experiments are not very compelling and could be improved. Table 2: Why is D_g + IW so poor?\", \"Importance sampling is known to suffer from high variance. How do you address this?\"], \"minor_comments\": [\"Typo in page 5: \\u201cparameteric\\u201d\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Good idea, good experiments\", \"review\": [\"Pros:\", \"shows improvements in data augmentation, choosing samples from generative models, and model based policy elevation.\", \"well written.\", \"experiments in 3 domains, for evaluation, data augmentation and policy evaluation.\"], \"cons\": [\"missing ablation experiments for what made the density ratio trick work: self normalization, architecture, etc.\", \"for policy evaluation, missing baseline with model free evaluation - by storing the log probs / policy that was used to obtain a particular trajectory, as standard in model free off policy learning.\", \"for the generative model evaluation experiments, the starting point is a pretrained classifier. Would be good to know what happens when a classifier is trained from scratch.\", \"lacking a bigger discussion for what happens when the two distributions lack common support.\"], \"missing_citations\": \"Azadi S, Olsson C, Darrell T, Goodfellow I, Odena A. Discriminator rejection sampling. arXiv preprint arXiv:1810.06758. 2018 Oct 16. - a discussion of using the discriminator in GANs \\nRosca M, Lakshminarayanan B, Mohamed S. Distribution matching in variational inference. arXiv preprint arXiv:1802.06847. 2018 Feb 19. 
- experimental work showing the failure modes of the density ratio trick.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
r1xaVLUYuE
Understanding Posterior Collapse in Generative Latent Variable Models
[ "James Lucas", "George Tucker", "Roger Grosse", "Mohammad Norouzi" ]
Posterior collapse in Variational Autoencoders (VAEs) arises when the variational distribution closely matches the uninformative prior for a subset of latent variables. This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with Probabilistic PCA (pPCA). We identify how local maxima can emerge from the marginal log-likelihood of pPCA, which yields similar local maxima for the evidence lower bound (ELBO). We show that training a linear VAE with variational inference recovers a uniquely identifiable global maximum corresponding to the principal component directions. We provide empirical evidence that the presence of local maxima causes posterior collapse in deep non-linear VAEs. Our findings help to explain a wide range of heuristic approaches in the literature that attempt to diminish the effect of the KL term in the ELBO to reduce posterior collapse.
[ "vae", "posterior collapse", "generative", "generative model", "latent variable", "probabilistic model", "pca", "ppca", "probabilistic pca" ]
Accept
https://openreview.net/pdf?id=r1xaVLUYuE
https://openreview.net/forum?id=r1xaVLUYuE
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HJeDvVuv5E", "SygIGxKzcE", "rkxqrA8jFV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690591044, 1555365902493, 1554898498018 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper37/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper37/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"a different perspective in understanding posterior collapse in VAEs\", \"review\": \"This paper presents a different perspective in understanding posterior collapse in VAEs by studying the connection between probabilistic PCA (pPCA) and linear VAEs. In previous work, it is widely acknowledged that the KL-term plays an important role in posterior collapse. However, in this paper, the authors show theoretically that even the marginal log-likelihood itself could have spurious local optima and for linear VAEs, the ELBO does not add any additional optima to the pPCA model. Experimental results on posterior collapse in non-linear VAEs provide evidence to the analysis.\\n\\nThe analysis in this paper presents a complementary view in helping us better understand posterior collapse. One thing which is not clear to me in the current analysis is how posterior collapse can happen to pPCA if it has the same spurious local optima. Another minor comment is on the MNIST dataset, given the data is practically binary, I wonder if that would also contribute to the preference of having a smaller \\\\sigma. Finally, it would be interesting if the authors could also comment on VAEs with other observational model rather than Gaussian (e.g., Multinomial, as in Krishnan et al. 2018, On the challenges of learning with inference networks on sparse, high-dimensional data).\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The paper draws connections between pPCA and the linear VAE in order to tackle the mode collapse problem. The connections between pPCA and linear VAEs are known\", \"review\": \"This paper draws connections between pPCA and linear VAEs. I would like to argue that part is straighforward, due to the following facts:\\n1. The exact pPCA posterior is a Gaussian whose mean depends linearly on x.\\n2. The variational family is also linear on x; therefore it includes the exact posterior.\\n3. Variational inference minimizes the KL between the variational family and the exact posterior.\\n4. The expectations in the ELBO can be analytically computed.\\nGiven these points, it is easy to conclude that variational inference will find the variational distribution that matches the posterior exactly, therefore recovering pPCA.\\n\\nThe rest of the paper uses the insights from the first part to analyze mode collapse in non-linear VAEs. I found that part more interesting than the former.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
HklaEUUtON
DISENTANGLED STATE SPACE MODELS: UNSUPERVISED LEARNING OF DYNAMICS ACROSS HETEROGENEOUS ENVIRONMENTS
[ "Ðorđe Miladinović", "Waleed Gondal", "Bernhard Schölkopf", "Joachim M. Buhmann", "Stefan Bauer" ]
Sequential data often originates from diverse environments, across which there exist both shared regularities and environment-specific factors. To learn robust cross-environment descriptions of sequences, we introduce disentangled state space models (DSSM). In the latent space of DSSM, environment-invariant state dynamics are explicitly disentangled from the environment-specific information that governs them. We empirically show that such separation enables robust prediction, sequence manipulation, and environment characterization. We also propose an unsupervised VAE-based training procedure to learn DSSM as Bayesian filters. In our experiments, we demonstrate state-of-the-art performance in controlled generation and prediction of bouncing-ball video sequences across varying gravitational influences.
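To make the claimed factorization concrete, here is a minimal generative sketch of a state space model in which a single global variable modulates otherwise shared dynamics; the linear-Gaussian choices, dimensions, and the way e enters the transition are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dz, dx = 50, 2, 4
A = 0.95 * np.eye(dz)                      # environment-invariant state dynamics
C = rng.normal(size=(dx, dz))              # emission model, shared across environments

def sample_sequence(e):
    """e plays the role of the global environment variable (a gravity-like constant)."""
    b = np.array([0.0, -e])                # environment-specific term steering the dynamics
    z, xs = np.zeros(dz), []
    for _ in range(T):
        z = A @ z + b + 0.05 * rng.normal(size=dz)
        xs.append(C @ z + 0.1 * rng.normal(size=dx))
    return np.stack(xs)

x_low = sample_sequence(e=0.1)             # two environments share A and C,
x_high = sample_sequence(e=1.0)            # differing only in the global factor E
```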
[ "State Space Models", "Sequential Data", "Bayesian Filtering", "Amortized Variational Inference", "Disentangled Representations", "Video Analysis" ]
Accept
https://openreview.net/pdf?id=HklaEUUtON
https://openreview.net/forum?id=HklaEUUtON
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "S1e2zNOw94", "SyxdSjqLF4" ], "note_type": [ "decision", "official_review" ], "note_created": [ 1555690516051, 1554586431800 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper36/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"Weak evaluation, but reasonable for workshop\", \"review\": [\"This paper presents a disentangled generative state space model. By using a global latent variable E the model captures environment-specific information and aims to be disentangled from the rest of the state information of the state space. In a single setting of 2D images of a bouncing ball in a varying gravity settings, this method seems to be yielding reasonable results.\", \"I like the ideas in this paper, and despite a weak evaluation, it is at a reasonable state for a workshop paper.\", \"Fig 6a suggests that E captures well the direction of the gravity but there seems to be additional information (the variation within Fig6a). Have the authors explored what is the information that creeps in E? And, thus, is the model truly disentagled?\", \"The video manipulation experiment is great, but seems to be of qualitative nature. It would be nice to see some quantitative results. For example, the same factors could be changed in the fully simulated environment and the output of the video manipulation experiments compared with the ground truth from the simulation.\", \"Finally, it would have been nice to see more evaluation results on different settings beyond the bouncing ball experiment, both in simulated and real environments. Additionally, a deeper evaluation of the extent to which disentangling happens would be quite useful.\"], \"minor\": \"\\\"This enhances robustness and ability to extrapolate\\\" -> \\\"... the ability\\\"\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SJxnVL8YOV
Fully differentiable full-atom protein backbone generation
[ "Namrata Anand", "Raphael Eguchi", "Po-Ssu Huang" ]
The fast generation and refinement of protein backbones would constitute a major advancement to current methodology for the design and development of de novo proteins. In this study, we train Generative Adversarial Networks (GANs) to generate fixed-length full-atom protein backbones, with the goal of sampling from the distribution of realistic 3-D backbone fragments. We represent protein structures by pairwise distances between all backbone atoms, and present a method for directly recovering and refining the corresponding backbone coordinates in a differentiable manner. We show that interpolations in the latent space of the generator correspond to smooth deformations of the output backbones, and that test set structures not seen by the generator during training exist in its image. Finally, we perform sequence design, relaxation, and ab initio folding of a subset of generated structures, and show that in some cases we can recover the generated folds after forward-folding. Together, these results suggest a mechanism for fast protein structure refinement and folding using external energy functions.
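The coordinate-recovery step (pairwise distances back to 3-D positions) has a classical, non-learned counterpart in multidimensional scaling via double centering; the paper replaces it with a learned, differentiable recovery network, so the sketch below only illustrates the underlying geometry, with sizes chosen arbitrarily.

```python
import numpy as np

def coords_from_distances(D, dim=3):
    """Classical MDS: recover coordinates (up to rigid motion) from Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    G = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centered coordinates
    lam, U = np.linalg.eigh(G)
    lam, U = lam[::-1][:dim], U[:, ::-1][:, :dim]
    return U * np.sqrt(np.clip(lam, 0.0, None))

X = np.random.default_rng(0).normal(size=(64, 3))      # 64 "atoms" in 3-D
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
X_rec = coords_from_distances(D)
D_rec = np.linalg.norm(X_rec[:, None] - X_rec[None, :], axis=-1)
print(np.abs(D - D_rec).max())                          # ~0: distances round-trip
```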
[ "GANs", "adversarial training", "proteins" ]
Accept
https://openreview.net/pdf?id=SJxnVL8YOV
https://openreview.net/forum?id=SJxnVL8YOV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "r1elyNuv5V", "S1emuKXW5E", "HJgWFLdXYV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690456143, 1555278187120, 1554380408915 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper35/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper35/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Nice application of GANs to protein generation.\", \"review\": \"This paper presents an end-to-end approach for generating protein backbones using generative adversarial networks, an important direction in applying deep generative models for generating complex, highly-structured biological data (proteins). First, a generative model (DCGAN) is used to generate pairwise distance matrices between atoms on the backbone. Second, a recover network transforms the distance matrix into the underlying 3D coordinates of the backbone. Finally, the recovered coordinates are post-edited by a recurrent error correction model.\\n\\nIn the experimental section, the authors present a rich set of qualitative evaluations, with examples and case studies demonstrating that the underlying latent space is smooth, and small steps in the latent space of the generator correspond to realistic deformations of the protein backbone. Unseen test examples are also encoded in the latent space. The authors conclude the paper by outlining an approach to iteratively refined generative backbones to host foldable sequences.\\n\\nThe paper is clearly written and the method is novel. My only comment is that it seems the submission lacks quantitive analysis, with most conclusions, were drawn from illustrative case studies. Additionally, the authors could also try using the Rosetta energy function as part of the discriminator, directly optimizing the generated structures to have low energy.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting Graph Generation Paper\", \"review\": \"This work presents a generative model for protein backbones (graphs). A GAN is used to generate a map of all pairwise distances among nodes. Then, an autoencoder-like network tries to place the nodes in the 3D space. Finally a refinement process is used to improve the output. A set of qualitative evaluations suggest positive results.\\n\\nI believe that this paper is interesting to be accepted in the workshop. \\n\\nOverall, many aspects of the paper are related to the application of protein folding that I feel unqualified to judge. I would have found useful a better introduction to the problems in the area to understand the application.\\n\\n* The evaluation seems interesting but I would have hoped for a more quantitative evaluation. Although I do not know of the domain-specific metrics, a common theme in such papers is to compare the probability distributions of some extrinistic characteristics of molecules/proteins between sample in the dataset and generated samples. The current qualitative results are nice and reasonable for a workshop paper, but this leaves no way for future work to compare to this one.\\n\\n* Currently, the generation of the distance maps, the recovery network and the refine modules are trained separately. It would be nice if the authors could explain why they avoided an end-to-end training procedure. 
Is there an optimization issue? Is this somehow related to the application?\\n\\n* The generated proteins are of fixed length (64). I assume that this isn't true of all proteins. (a) How does this method scale with the size of these proteins? (b) Does the current framework allow for variable-sized graphs?\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
HJg24U8tuE
Smoothing Nonlinear Variational Objectives with Sequential Monte Carlo
[ "Antonio Moretti", "Zizhao Wang", "Luhuan Wu", "Itsik Pe'er" ]
The task of recovering nonlinear dynamics and latent structure from a population recording is a challenging problem in statistical neuroscience motivating the development of novel techniques in time series analysis. Recent work has focused on connections between Variational Inference and Sequential Monte Carlo for performing inference and parameter estimation on sequential data. Inspired by this work, we present a framework to develop Smoothed Variational Objectives (SVOs) that condition proposal distributions on the full time-ordered sequence of observations. SVO maintains both expressiveness and tractability by sharing parameters of the transition function between the proposal and target. We apply the method to several dimensionality reduction/expansion tasks and examine the dynamics learned with a quantitative metric. SVO performs favorably against the state of the art.
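For orientation, the sketch below estimates log p(x_{1:T}) with a bare-bones bootstrap particle filter on a linear-Gaussian toy model; all variances and the toy series are assumptions, and the paper's contribution — conditioning and parameterizing the proposal on the full time-ordered observation sequence — is precisely what this minimal filter omits.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 25, 100                                     # time steps, particles
x = np.cumsum(rng.normal(size=T)) + 0.5 * rng.normal(size=T)  # toy observations

log_Z = 0.0                                        # running evidence estimate
z = rng.normal(size=K)                             # initial particles from the prior
for t in range(T):
    if t > 0:
        z = z + rng.normal(size=K)                 # bootstrap proposal = transition
    logw = -0.5 * np.log(2 * np.pi * 0.25) - (x[t] - z) ** 2 / (2 * 0.25)
    m = logw.max()
    w = np.exp(logw - m)
    log_Z += m + np.log(w.mean())                  # log of the average weight
    z = z[rng.choice(K, size=K, p=w / w.sum())]    # multinomial resampling
print(log_Z)                                       # stochastic lower bound on log p(x)
```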
[ "sequential monte carlo", "variational inference", "time series" ]
Accept
https://openreview.net/pdf?id=HJg24U8tuE
https://openreview.net/forum?id=HJg24U8tuE
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "rkxsbN_wqN", "rylXFG749N", "S1eEnWqJqV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690499459, 1555473019337, 1555173803797 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper34/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper34/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"This paper studies sequential MC for training deep generative models. The paper is well-written but the authors should connect the work to existing works as mentioned by reviewer 1\"}", "{\"title\": \"Review\", \"review\": \"This paper describes a framework called Smoothed Variational Objectives (SVOs) for performing inference and parameter estimation in nonlinear dynamical systems. The proposed method is evaluated on three benchmarks (Fitzhugh-Nagumo, Lorenz Attractor, and electrophysiology data) in terms of R^2_k, and shows favorable results compared to previous algorithms.\\n\\nOverall, I think this paper is well-written and well-structured. It provides enough background on variational inference and Sequential Monte Carlo methods and is more or less self-contained. Unfortunately, I am not an expert on this topic and won\\u2019t be able to provide more insightful opinions. \\n\\nOther questions/comments:\\nIs the \\\\delta term in Eq defined anywhere?\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Not new, not smoothing, still an interesting question\", \"review\": \"This paper proposes two methods to extend recent work in filtering SMC-based variational objectives to the smoothing case. The first approach is a Monte Carlo objective (MCO) based on Forward Filtering Backward Smoothing (FFBS), and the second technique gives the SMC proposal distribution access to all observations. The authors evaluate both techniques experimentally and find that the FFBS-based technique does not perform well, while giving the proposal access to all observations improves performance on several tasks.\\n\\nWhile the paper is generally written well and easy to follow, the main technique (giving the proposal access to all observations) has been presented previously in the literature and is misrepresented as a smoothing algorithm when it is not.\", \"to_expand_on_these_points\": \"1. Filtering Variational Objectives (Maddison et al. 2017) section 6.4 presents experiments that run FIVO with a proposal that conditions on the state of a bidirectional RNN run over the observations. They find that it does not reliably help on their tasks.\\n2. Changing the information that the proposal distribution has access to does not change SMC\\u2019s sequence of target distributions. If only the proposal is changed and the form of the weights is not changed, then the algorithm is still based on filtering SMC and is not smoothing. Unfortunately, it is not entirely clear what SMC scheme the authors use with the new proposal. It seems that they use the future-conditioned proposal with the weights defined in equation (12), but if this is not the case it should be clarified.\", \"further_feedback\": \"1. As discussed in Maddison et al. 2017, an MCO based on filtering SMC cannot become tight, even when q is set to the true smoothing distribution. 
Because of this, it is useful to compare the performance of the proposed algorithm to the IWAE bound (with the same proposal) which can become tight and allow the proposal to make full use of the information available to it. The authors should consider incorporating this comparison in their experiments.\\n2. On page 4 the authors state \\u201cAs with the IWAE, increasing K yields a tighter bound L_SMC defined below\\u201d. I am not aware of a proof that L_SMC is provably tighter as K increases. The authors should provide a proof or citation or remove the statement.\\n3. There is prior work on developing variational objectives based on smoothing SMC, including\", \"graphical_model_inference\": \"Sequential Monte Carlo meets deterministic approximations, Lindsten et al 2018\\n\\nTwisted Variational Sequential Monte Carlo, Lawson et al. 2018\\n\\n\\n\\nThe authors should consider incorporating this in their related works.\\n\\nOverall, it is still interesting to consider why an MCO based on filtering SMC would perform better when the proposal is given access to all observations. If the authors change their paper to address the points above, I will consider changing my score.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
rygiEL8FOV
Context Mover's Distance & Barycenters: Optimal transport of contexts for building representations
[ "Sidak Pal Singh", "Andreas Hug", "Aymeric Dieuleveut", "Martin Jaggi" ]
We present a framework for building unsupervised representations of entities and their compositions, where each entity is viewed as a probability distribution rather than a fixed length vector. In particular, this distribution is supported over the contexts which co-occur with the entity and are embedded in a suitable low-dimensional space. This enables us to consider the problem of representation learning with a perspective from Optimal Transport and take advantage of its numerous tools such as Wasserstein distance and Wasserstein barycenters. We elaborate how the method can be applied for obtaining unsupervised representations of text and illustrate the performance quantitatively as well as qualitatively on tasks such as measuring sentence similarity and word entailment, where we empirically observe significant gains (e.g., 4.1% relative improvement over Sent2vec and GenSen). The key benefits of the proposed approach include: (a) capturing uncertainty and polysemy via modeling the entities as distributions, (b) utilizing the underlying geometry of the particular task (with the ground cost), (c) simultaneously providing interpretability with the notion of optimal transport between contexts and (d) easy applicability on top of existing point embedding methods. In essence, the framework can be useful for any unsupervised or supervised problem (on text or other modalities); and only requires a co-occurrence structure inherent to many problems. The code, as well as pre-built histograms, are available under https://github.com/context-mover.
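To make the core operation concrete: the distance between two entities is an optimal transport cost between their context histograms, whose bins sit at embedded context points. A self-contained Sinkhorn sketch of the entropy-regularized version follows; the toy embeddings, histogram sizes, and regularization strength are illustrative assumptions (barycenters, used for sentences, are not shown).

```python
import numpy as np

def sinkhorn_cost(a, b, M, reg=0.1, n_iter=200):
    """Entropy-regularized OT cost between histograms a, b under ground cost M."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]            # transport plan
    return (P * M).sum()

rng = np.random.default_rng(0)
ctx = rng.normal(size=(30, 50))                # 30 context points embedded in R^50
M = np.linalg.norm(ctx[:, None] - ctx[None, :], axis=-1)   # pairwise ground cost
a = rng.dirichlet(np.ones(30))                 # context histogram of entity 1
b = rng.dirichlet(np.ones(30))                 # context histogram of entity 2
print(sinkhorn_cost(a, b, M))                  # a regularized Context Mover's Distance
```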
[ "optimal transport", "Wasserstein distance and barycenters", "representation learning", "NLP", "sentence similarity", "entailment" ]
Accept
https://openreview.net/pdf?id=rygiEL8FOV
https://openreview.net/forum?id=rygiEL8FOV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "ryevD4dw9N", "HJenDaCGcN", "BylxXI3M9N" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690591024, 1555389795567, 1555379735761 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper33/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper33/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Review\", \"review\": \"This paper proposes to construct word embeddings from a histogram over context words, instead of as point vectors, which allows for measuring distances between two words in terms of optimal transport between the histograms. On sentence similarity and word entailment tasks, the method is competitive with previous approaches, although not by a huge margin.\\n\\nThe paper proposes a method to augment representation of an entity (such as a word) from standard \\\"point in a vector space\\\" to a histogram with bins located at some points in that vector space. In this model, the bins correspond the context objects, the location of which are the standard point embedding of those objects, and the histogram weights correspond to the strength of the contextual association. The distance between two representations is then measured with, Context Mover Distance, based on the theory of optimal transport, which is suitable for computing the discrepancy between distributions. \\n\\nPros\\n- Mathematically elegant method to represent words as distributional estimates of context words.\\n- Novel idea to use wasserstein barycenter to measure sentence similarity\\n- Novel idea to use Wasserstein distance for hypernym detection.\", \"cons\": [\"Results do not show significant improvement over baselines.\", \"Potentially complicated for practitioners in the community. Computing CMD and wasserstein barycenters is not trivial and can be inefficient. For this method to be practically useful (and see wide adoption), I believe there has to be a compelling use case for using distributional estimates as oppose to standard point estimates, which isn't demonstrated in the paper. Nevertheless, I believe this paper makes an important contribution.\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"A reasonable proposal for distributionally representing words. Some issues with clarity in writing.\", \"review\": [\"Positives:\", \"The results are straightforward, and compare to the standard baselines (SIF, infersent etc). I think these are not SoTA numbers but the relative comparisons seem fine, especially for a workshop paper. The addition of supplementary ablation experiments is also much appreciated.\", \"The approach is fairly clear, and there's no real big holes in terms of how the problem is set up and solved.\"], \"conceptual\": [\"I'm a bit unhappy with how the entailment experiments were set up. The narrative up to the last paragraph (\\\"For this purpose, ..\\\") is fairly clear that one wants to find distributions that are 'contained in the support' of another. You could directly do this on the PPMI matrix you have and define a measure of 'containment' in terms of distributions. Instead of this, you end up plugging in a (heurisitic) ground metric and computing embeddings. 
This seems haphazard, and I'm not terribly convinced this makes sense. Are you doing consistency checks like making sure you dont have negative-cost-cycles?\"], \"clarity\": [\"Throughout the paper, there are all sorts of minor unsupported side-remarks and claims that should be stripped. The paper has a fairly straightforward main story and positive results; the addition of these remarks just detract from the rest of the paper. The authors should ask themselves - would I be willing to write a supplemental proof to make this statement precise? or is it just a side-remark I can remove.\", \"The explanation surrounding equation 4 is fairly confusing. I guess what's happening is that you're taking each context, embedding it to some metric space, and weighting it by its bin count. This is quite the sudden leap, since until this point it wasn't really clear that you were going to embed the contexts with a base embedding. Is this an approximation to the actual thing you want to compute (OT over contexts?) or a way to induce the distances? I think its a little bit of both, but the explanation here needs to be more carefully thought out and laid out. I think it would help to make the inputs and outputs to your method precise.\", \"'We also consider adding the information from point estimate into the distributional estimate to get best of both the worlds.' - you should motivate why you want to do this ('best of both worlds' is extremely unclear.. what problem are you solving exactly?). You should then precisely state what you're doing. Your setup Eq (4) is already a bit confusing, so you need to be a bit careful when building on it. I guess the point estimate being added here is the original Glove embedding?\", \"'Since the contexts are dense embeddins' - you really need to explain this. A context is a context, not an embedding - I think the more precise statement is that a context can be mapped to a dense embedding. I assume it's something like, you start with point estimates (i.e. glove vectors) of contexts, so you can treat each context as being assigned an associated vector. You then cluster this, and sum over the clusters. I'm also not sure if summing normalized SPPMI values makes sense as an object. Shouldn't you merge the contexts and then compute the SPPMI again over the 'combined' context? Either way, this part can be made much more clear.\", \"I'm not sure why barycenters is obviously better.. if you have polysemy, you'd want to select the meaning that's implied by all the words, and reject any others. The barycenter does not do this, because you're still incurring costs from the word sense that's not being used in the sentence. I guess it's better than the alternatives?\", \"In the paragraph connecting SIF and CoMB - i have no idea what the precise connection is. You should write down propositions and equations for any precise statements like this one (or remove it).\"], \"minor_comment\": [\"'Also, KL-divergence isn\\u2019t defined when the supports of distributions under comparison don\\u2019t fully overlap' - is false, you only need absolute continuity, meaning you don't need full overlap - just that the support of the distribution inside the log must be a subset of the support of the distribution outside the log.\", \"'Hence, one potential application could be in checking for the implicit bias in point estimates (Bolukbasi et al., 2016) and then correcting it via the ground cost.' 
- this is a pretty vacuous statement - you could have compared pairwise distances for word embeddings to correct it, for example. I honestly think the throwaway comments like this one and the one above should just be stripped from the paper.\", \"The section 'Relation between the histogram and point estimates.' is similarly vacuous. Yes, count based methods use histograms and neural networks use vectors. Yes, your paper kind of uses both. I really do not think pointing this out adds much insight to your paper. It may be that you had something more profound to say, but it doesn't quite come though.\", \"'A practical take-home message of this work thus is to not throw away the co-occurrence information e.g. when using GloVe, but to instead pass it on to our method.' - should be moved to the discussion.\", \"You may also want to put the timings in the experiment - runtimes are somewhat useless without matching accuracy numbers for approximate algorithms such as sinkhorn. What's the relative (percent) error on your wasserstein distance estimates?\", \"' In fact, Figure 3 says it all,' - in fact, figure 3 does not say it all because it's a particular example projected into 2d without very much explanation. In fact i'd argue that it's not a terribly enlightening - what is the ground truth supposed to look like? why is euclidean averaging bad?\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rJlo4UIt_E
Discrete Flows: Invertible Generative Models of Discrete Data
[ "Dustin Tran", "Keyon Vafa", "Kumar Agrawal", "Laurent Dinh", "Ben Poole" ]
While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. We display proofs of concept under 2 flow architectures: discrete autoregressive flows enable bidirectionality, allowing for example tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows (i.e., with layer structure from RealNVP) enable parallel generation such as exact nonautoregressive text modeling.
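The key identity — for a bijection f on a discrete space, p_y(y) = p_x(f^{-1}(y)) with no log-det-Jacobian — can be verified in a few lines. The modular shift below is one of the simplest invertible maps on {0, ..., K-1}; the shift value and base distribution are arbitrary toy choices.

```python
import numpy as np

K = 5
px = np.random.default_rng(0).dirichlet(np.ones(K))  # base distribution on {0,...,K-1}
shift = 3                                            # parameter of the bijection

f = lambda x: (x + shift) % K                        # invertible on the integers mod K
f_inv = lambda y: (y - shift) % K

py = np.array([px[f_inv(y)] for y in range(K)])      # discrete change of variables
assert np.isclose(py.sum(), 1.0)                     # still normalized: f permutes mass
print(px, py)
```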
[ "discrete data", "normalizing flows", "autoregressive", "realnvp" ]
Accept
https://openreview.net/pdf?id=rJlo4UIt_E
https://openreview.net/forum?id=rJlo4UIt_E
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "H1erbV_w54", "BJljG_pfqV", "ryxodAOzqE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690493315, 1555384339143, 1555365491349 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper32/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper32/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Interesting preliminary work\", \"review\": \"This paper proposes a version of normalizing flows applicable to discrete data. The authors motivate this idea in two ways: (1) autoregressive models that are bidirectional, and (2) non-autoregressive likelihood-based models of discrete data. They show that flows on discrete data do not need an expensive log-det-Jacobian step. However, they require a function mapping from discrete variables to discrete variables. As such a function is not directly differentiable, this paper uses a straight-through gradient estimator.\\n\\nThis is an interesting idea but is not that well fleshed out. A more thorough discussion of calculating gradients through the discrete outputs of their neural networks seems in order. Additionally, the experiments shown are on very small toy data.\", \"pros\": [\"Interesting idea\", \"Clear, no-nonsense writing\"], \"cons\": [\"No experiments on real data\", \"Straight-through estimator seems limiting (e.g. not sure how it applies to non-ordinal data)\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Normalizing flows for discrete distributions\", \"review\": \"The paper extends the concept of normalizing flows in modeling high-dimensional continuous distributions to discrete distributions.\\nIt is true that the paper only presents a proof of concept with few toy experiments, it is well written and does a good job motivating discrete flows. However, a real application of these models to model large-scale text data with large vocabulary is still an open question.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
Syx9EIIKdN
Adversarial Mixup Resynthesizers
[ "Christopher Beckham", "Sina Honari", "Alex Lamb", "Vikas Verma", "Farnoosh Ghadiri", "R Devon Hjelm", "Christopher Pal" ]
In this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders. We explore models that are capable of combining the attributes of multiple inputs such that a resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data. Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations that are consistent with a conditioned class label. We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.
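A sketch of the two combination operators alluded to above — convex interpolation and a Bernoulli-masked swap of two latent codes; the latent size and mixing distributions are illustrative assumptions, and in the full model the decoded mixture is additionally scored by a real-versus-synthesized discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=128), rng.normal(size=128)  # encoder outputs for two inputs

alpha = rng.beta(1.0, 1.0)                           # mixup coefficient
z_interp = alpha * z1 + (1.0 - alpha) * z2           # linear interpolation of codes

mask = rng.binomial(1, 0.5, size=128)                # Bernoulli mask over latent units
z_masked = mask * z1 + (1 - mask) * z2               # masked combination of codes

# either mixed code would then be decoded and judged by the discriminator
```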
[ "autoencoders", "adversarial", "GANs" ]
Accept
https://openreview.net/pdf?id=Syx9EIIKdN
https://openreview.net/forum?id=Syx9EIIKdN
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "S1xl-NdP5E", "Ske0SU2HqV", "Syx7meRz54" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690488399, 1555576389848, 1555386395493 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper31/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper31/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"The paper's results are interesting but the reviewers note that the loss is not well motivated theoretically. It would be good to explore this aspect further.\"}", "{\"title\": \"OK, but not good enough yet\", \"review\": \"This paper proposes to improve representations learned by autoencoders by applying ideas from adversarial learning and mixup.\\n\\nI think this is an interesting area of research, however I think the current draft is not ready for acceptance yet.\\n\\nToo many moving parts in equations (5) and (6) which makes it difficult to understand why the method works.\", \"some_of_the_design_choices_seem_arbitrary_without_any_clear_discussion\": \"e.g. why Bernoulli mixup instead of usual mixup?\\nThe mixing consistency loss in (5) is not explained clearly, and seems like a hacky fix. \\n\\nI was expecting to see at least some discussion of why this loss is interesting from a theoretical perspective and how it\\u2019s different from existing methods for regularizing auto-encoders. Difference from closely related work (e.g. Berthelot et al. 2019) is not clearly discussed. The authors defer to supplementary material for key details and even the discussions in sections 5.2 and 5.3 of supplementary material do not convincingly address these questions.\", \"results_in_table_1\": \"proposed method doesn\\u2019t seem to be significantly outperform ACAI.\", \"results_in_table_2\": \"why no comparison to ACAI?\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Strong results in need of a little theory\", \"review\": \"The authors propose to construct a generative model by training an autoencoder with a structured latent space. In this latent space, new datapoints can be generated by interpolating between existing datapoints, either linearly or by masking and replacing. They demonstrate that this model performs well at interpolating between data and that it can make use of supervised data attributes.\\n\\nMy main lingering question after reading the paper is about how exactly the method works theoretically. That is, is there a single objective which this method optimizes? Exactly what properties should we expect the latent space to have?\", \"minor_note\": \"On the first line of text after Equation 3, it says lambda where I believe it was meant to say alpha.\", \"pros\": [\"Good results\", \"Strong empirical exploration\"], \"cons\": [\"Limited theory and complicated objective function sheds little light on the mechanism of the method's success\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
HklKEUUY_E
On the relationship between Normalising Flows and Variational- and Denoising Autoencoders
[ "Alexey A. Gritsenko", "Jasper Snoek", "Tim Salimans" ]
Normalising Flows (NFs) are a class of likelihood-based generative models that have recently gained popularity. They are based on the idea of transforming a simple density into that of the data. We seek to better understand this class of models, and how they compare to previously proposed techniques for generative modeling and unsupervised representation learning. For this purpose we reinterpret NFs in the framework of Variational Autoencoders (VAEs), and present a new form of VAE that generalises normalising flows. The new generalised model also reveals a close connection to denoising autoencoders, and we therefore call our model the Variational Denoising Autoencoder (VDAE). Using our unified model, we systematically examine the model space between flows, variational autoencoders, and denoising autoencoders, in a set of preliminary experiments on the MNIST handwritten digits. The experiments shed light on the modeling assumptions implicit in these models, and they suggest multiple new directions for future research in this space.
[ "variational autoencoders", "denoising variational autoencoders", "normalizing flows", "generative modelling", "image synthesis", "denoising autoencoders", "VAE", "DAE", "VDAE", "NF" ]
Accept
https://openreview.net/pdf?id=HklKEUUY_E
https://openreview.net/forum?id=HklKEUUY_E
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "Bkeix4_vqN", "BygFOVRG5E", "Syep7Jnz5E" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690482666, 1555387505022, 1555377957514 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper30/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper30/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"Interesting exposition of unifying NFs, VAEs, and DAEs\", \"review\": \"This paper proposes a model family that unifies NFs, VAEs and DAEs. It also introduces an extension of this model that allows for using non-invertible encoders (e.g. projection to a smaller dimensionality) and discrete data. Overall, the idea is promising, but the empirical results are not strong enough to warrant a strong commendation.\\n\\nPros\\n- The proposed model that blends NFs, VAEs and DAEs is original, and generalises over standard NFs in that it allows non-zero noise levels.\\n- When latent dimensionality is smaller than the input space, the model consistently outperforms the VAE baseline.\\n\\nCons\\n- Performance deteriorates for bigger latent dimensionalities (e.g. when n=m)\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"review\", \"review\": \"This paper applies normalizing flows into denoising autoencoders, and derives a variational lower bound when the posterior has this form. Experiments on MNIST show improvements over VAE, but worse than other NF models.\", \"pros\": \"The paper is well written and easy to follow. It does a good job in reviewing related work.\", \"cons\": \"While it combines VAE, NF and DAE, no particular novel technique is introduced, and it\\u2018s no better than existing models, so the significance of the proposed framework is unclear. In addition, the expensive complexity of L-VDAE makes it difficult to scale to high-dimensional data. Nevertheless, the topic is very relevant and I think it's worth discussing at this workshop.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rJgON8ItOV
Visualizing and Understanding GANs
[ "David Bau", "Jun-Yan Zhu", "Hendrik Strobelt", "Bolei Zhou", "Joshua B. Tenenbaum", "William T. Freeman", "Antonio Torralba" ]
We present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. Then, we examine the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers and models, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene.
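The interventions at the unit level reduce to editing channels of an intermediate generator feature map and re-rendering the image; the numpy sketch below shows only that tensor edit, with the layer shape and the unit indices matched to a concept as placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(1, 512, 8, 8))   # generator feature map at some layer
tree_units = [12, 48, 301]                   # units matched to "tree" by dissection

ablated = features.copy()
ablated[:, tree_units] = 0.0                 # ablation: remove the concept

inserted = features.copy()
inserted[:, tree_units, 2:5, 2:5] = 6.0      # insertion: activate units in a region

# pushing `ablated` / `inserted` through the remaining layers reveals the
# causal effect of these units on the rendered objects
```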
[ "GAN", "visualization", "interpretable", "segmentation", "causality" ]
Accept
https://openreview.net/pdf?id=rJgON8ItOV
https://openreview.net/forum?id=rJgON8ItOV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HkeMlNOvc4", "SJek4zZ45V", "r1eT4TcCtE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690474254, 1555464742783, 1555111221119 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper28/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper28/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"nice work in general\", \"review\": \"This work describes a simple method to visualize and understanding GANS. The method starts with identifying semantic object classes in an image (with a supervised semantic segmentation network) and then manipulates the network units to study the impact. It shows cool applications of object-level control of images.\\n\\nMy main concern of this approach is I'm not sure if the method is image-specific or not. If it is, that means for the same object occurring in different images, there will be different network units responsible. If this is the case there wouldn't be much conclusion which can be drawn from the results.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting topic and carefully-designed experiments\", \"review\": \"This paper tries to ask and answer a challenging question: What does a GAN know? This is an interesting topic, and the authors proposed a simple method that can be used to visualize and understand the internal representations that GAN has learned.\", \"pros\": \"The experimental results are very interesting. For example, by using the proposed approach, one can identify GAN units that match trees, and therefore one can ablate the corresponding units to remove tree, or activate the corresponding units to add tree. The authors also provide many other quantitative results in order to better understand GAN. The provided video link is also very interesting. It provides a kind of controllable manner to manipulate generated images.\", \"cons\": \"Though the proposed method looks simple, I find it a little bit difficult for me to follow the details. More clarity is needed.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SyxPVLUYdE
Revisiting Auxiliary Latent Variables in Generative Models
[ "Dieterich Lawson", "George Tucker", "Bo Dai", "Rajesh Ranganath" ]
Extending models with auxiliary latent variables is a well-known technique to increase model expressivity. Bachman & Precup (2015); Naesseth et al. (2018); Cremer et al. (2017); Domke & Sheldon (2018) show that Importance Weighted Autoencoders (IWAE) (Burda et al., 2015) can be viewed as extending the variational family with auxiliary latent variables. Similarly, we show that this view encompasses many of the recent developments in variational bounds (Maddison et al., 2017; Naesseth et al., 2018; Le et al., 2017; Yin & Zhou, 2018; Molchanov et al., 2018; Sobolev & Vetrov, 2018). The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model. We develop a generative model analogous to the IWAE bound and empirically show that it outperforms the recently proposed Learned Accept/Reject Sampling algorithm (Bauer & Mnih, 2018), while being substantially easier to implement. Furthermore, we show that this generative process provides new insights on ranking Noise Contrastive Estimation (Jozefowicz et al., 2016; Ma & Collins, 2018) and Contrastive Predictive Coding (Oord et al., 2018).
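A sketch of the self-normalized importance sampling (SNIS) generative process studied here: draw K proposal samples, weight them with a learned unnormalized log-density ratio, and emit one by sampling from the normalized weights. The proposal and energy below are toy stand-ins, not the paper's learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 128

log_energy = lambda x: -(x - 2.0) ** 2        # learned unnormalized log-weight (toy)

xs = rng.normal(size=K)                       # K i.i.d. samples from the proposal
logw = log_energy(xs)
w = np.exp(logw - logw.max())
idx = rng.choice(K, p=w / w.sum())            # self-normalized importance resampling
print(xs[idx])                                # a single draw from the SNIS model
```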
[ "variational inference", "monte carlo objectives", "VAE", "IWAE", "sampling", "contrastive predictive coding", "CPC", "noise contrastive estimation", "NCE", "auxiliary variable variational inference", "generative modeling", "energy-based models" ]
Accept
https://openreview.net/pdf?id=SyxPVLUYdE
https://openreview.net/forum?id=SyxPVLUYdE
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "rklxfPosc4", "BklBXunLqN", "Hye8mfl7cE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555965704266, 1555642397143, 1555395101550 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper25/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper25/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Review\", \"review\": \"General:\\n\\nThis paper revisits auxiliary latent variable formulation of variational inference. Inspired by that, the authors develop a generative model based on self-normalized importance sampling (SNIS), and connect it to recent approaches such as NCE and CPC. The view is very interesting. In experiments on MNIST, SNIS combined with VAE framework outperforms recently proposed LARS, while being faster and computationally cheaper.\", \"pros\": [\"This paper provides a unified view of variational lower bound through auxiliary latent variables (this is not new though), relates that to the generative model side, and proposes a self-normalized importance sampling process as a generative model. This new method called SNIS can be connected with NCE and CPC. As mentioned in the paper, a unified view over different approaches might provide insights for future research\", \"While only evaluated in the VAE context, the method can be potentially general and effective for other settings (as mentioned in the LARS paper).\"], \"cons\": [\"I would like to see more experiments under different settings to show efficacy of the method\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting view of some recent work on improving variational bounds\", \"review\": \"This paper provides a different view of some recent work on improving variational bounds through auxiliary latent variable models. This connection gives some new insights in the existing work, e.g., IWAE, ranking NCE, and CPC. The paper also explores the possibility of using auxiliary latent variable models in the generative model.\\n\\nThe research in this paper is still in its early stage and it would be interesting to see how some of the unanswered questions in the current paper can be addressed.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
Hkl8EILFdN
Interactive Visual Exploration of Latent Space (IVELS) for peptide auto-encoder model selection
[ "Tom Sercu", "Sebastian Gehrmann", "Hendrik Strobelt", "Payel Das", "Inkit Padhi", "Cicero Dos Santos", "Kahini Wadhawan", "Vijil Chenthamarakshan" ]
We present a tool for Interactive Visual Exploration of Latent Space (IVELS) for model selection. Evaluating generative models of discrete sequences from a continuous latent space is a challenging problem, since their optimization involves multiple competing objective terms. We introduce a model-selection pipeline to compare and filter models throughout consecutive stages of more complex and expensive metrics. We present the pipeline in an interactive visual tool to enable the exploration of the metrics, analysis of the learned latent space, and selection of the best model for a given task. We focus specifically on the variational auto-encoder family in a case study of modeling peptide sequences, which are short sequences of amino acids. This task is especially interesting due to the presence of multiple attributes we want to model. We demonstrate how an interactive visual comparison can assist in evaluating how well an unsupervised auto-encoder meaningfully captures the attributes of interest in its latent space.
[ "visualization", "peptide", "interactive", "auto-encoder", "vae" ]
Accept
https://openreview.net/pdf?id=Hkl8EILFdN
https://openreview.net/forum?id=Hkl8EILFdN
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HylSxVOPcE", "Hyxp2iQZcE", "Hygw-kVKtE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690477033, 1555278772551, 1554755326980 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper24/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper24/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"tool for analysing vae over discrete data\", \"review\": \"The paper presents a tool to analyse various aspects of model trained using the VAE (amortised VI) framework for discrete data. VAE framework is known to be prone to several learning related issues such as slow convergence, posterior collapse, etc. therefore such a tool could provide a significant insight in tuning the model as well as selecting a model that best suits the needs.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting visualization tool for generative models\", \"review\": \"This paper proposed a tool for visualizing the latent spaces for generative models. The authors demonstrated this tool by applying this tool to show some visualizations of some auto-encoders (VAE, WAE and AAE).\", \"pros\": \"1. The paper is easy to follow and the visualization results are clear. It is easy to understand the metrics from the figures of the visualization tool.\\n\\n2. The visualized metrics are useful and can show the differences between the WAE and the AAE compared with the VAE, which means that the visualizations are successful.\\n\\n3. The visualization tool can also analyze the attributes in the latent space, which is useful.\", \"cons\": \"1. It is better if more generative models (e.g. GAN) can be studied using this tool.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
HylINLUKuV
WiSE-ALE: Wide Sample Estimator for Aggregate Latent Embedding
[ "Shuyu Lin", "Ronald Clark", "Robert Birke", "Niki Trigoni", "Stephen Roberts" ]
In this paper, we present a new generative model for learning latent embeddings. Compared to the classical generative process, where each observed data point is generated from an individual latent variable, our approach assumes a global latent variable to generate the whole set of observed data points. We then propose a learning objective that is derived as an approximation to a lower bound to the data log likelihood, leading to our algorithm, WiSE-ALE. Compared to the standard ELBO objective, where the variational posterior for each data point is encouraged to match the prior distribution, the WiSE-ALE objective matches the averaged posterior, over all samples, with the prior, allowing the sample-wise posterior distributions to have a wider range of acceptable embedding mean and variance and leading to better reconstruction quality in the auto-encoding process. Through various examples and comparison to other state-of-the-art VAE models, we demonstrate that WiSE-ALE has excellent information embedding properties, whilst still retaining the ability to learn a smooth, compact representation.
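The distinguishing term matches the posterior averaged over samples, not each per-sample posterior, to the prior. The sketch below computes one batch-level KL by moment-matching the Gaussian-mixture aggregate with a single Gaussian — a simplifying assumption made for illustration, not necessarily the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
B, d = 64, 8
mu = rng.normal(size=(B, d))                # per-sample posterior means
var = np.exp(rng.normal(size=(B, d)))       # per-sample posterior variances

# first and second moments of the aggregate posterior (mixture of B Gaussians)
mu_agg = mu.mean(axis=0)
var_agg = (var + mu ** 2).mean(axis=0) - mu_agg ** 2

# KL( N(mu_agg, diag(var_agg)) || N(0, I) ): one term for the whole batch,
# leaving individual posteriors free as long as their aggregate matches the prior
kl = 0.5 * np.sum(var_agg + mu_agg ** 2 - 1.0 - np.log(var_agg))
print(kl)
```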
[ "Generative models", "Latent variable models", "Variational inference", "Auto-encoder", "Representation learning" ]
Accept
https://openreview.net/pdf?id=HylINLUKuV
https://openreview.net/forum?id=HylINLUKuV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HJx6xEdP5N", "HklOGRbmc4", "r1xeyL3fc4" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690484548, 1555402256305, 1555379672400 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper23/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper23/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"solid work\", \"review\": \"This paper derives a variational lower bound to the data log likelihood, which allows us to\\nimpose a prior constraint on the bulk statistics of the aggregate posterior distribution for the entire\\ndataset. The analysis shows that the proposed method achieves better reconstruction quality, as well as forming a smooth, compact and meaningful latent representation. I would like to accept the paper for the workshop.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A new generative process for VAE\", \"review\": \"This paper presents a new generative model for learning latent embeddings. Compared with the classical generative process, where each observed data point is generated from an individually sampled latent representation from thr prior distribution, the proposed approach assumes a single sampled latent representation to generate the whole set of observed data points.\\n\\nThe authors then propose a learning objective without guarantee of boundness. Compared with the standard ELBO objective, where the variational posterior for each data point is encouraged to matched with the prior distribution, the proposed objective matches the averaged posterior of all samples with the prior, allowing the posterior distribution to have a wider range of acceptable embedding mean and variance, as long as the averaged value over all samples is close to a standard Gaussian (Fig. 3 and 4).\\n\\nThis is a nice submission with reasonable technical contribution and empirical evaluation (against beta-VAE, classical AEVB and WAE) as a workshop paper. It would be great if the authors could further analyze the error introduced in the approximation by Jensen inequality, as outlined in the future work. Meanwhile, a quantitative analysis against WAE (instead of only presenting illustrative examples in Fig. 5(b)) would also be helpful.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
ryeBN88Ku4
Generating Diverse High-Resolution Images with VQ-VAE
[ "Ali Razavi", "Aaron van den Oord", "Oriol Vinyals" ]
We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than possible before. We use simple feed-forward encoder and decoder networks, thus our model is an attractive candidate for applications where the encoding and decoding speed is critical. Additionally, this allows us to only sample autoregressively in the compressed latent space, which is an order of magnitude faster than sampling in the pixel space, especially for large images. We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, is able to generate samples with quality that rivals that of state of the art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GAN's known shortcomings such as mode collapse and lack of diversity.
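The primitive everything rests on is a nearest-neighbour lookup into a learned codebook (trained with a straight-through gradient, which a plain numpy sketch cannot express); the shapes and codebook size below are illustrative, and in the hierarchical model this quantization is applied at each scale.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 512, 64                               # codebook size, code dimension
codebook = rng.normal(size=(K, D))
z_e = rng.normal(size=(8, 8, D))             # encoder output grid for one image

flat = z_e.reshape(-1, D)
d2 = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # squared distances
idx = d2.argmin(axis=1)                      # discrete latent codes
z_q = codebook[idx].reshape(z_e.shape)       # quantized features fed to the decoder

# the autoregressive prior (PixelCNN/PixelSNAIL-style) is then fit to `idx`;
# sampling it over this compressed grid is what makes generation fast
```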
[ "Vector Quantization", "Autoregressive models", "Generative Models" ]
Accept
https://openreview.net/pdf?id=ryeBN88Ku4
https://openreview.net/forum?id=ryeBN88Ku4
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "ryxglVuv9V", "S1xrQYsfq4", "r1xK9aFAY4" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690471709, 1555376412737, 1555107217291 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper22/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper22/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"The generated images are impressive but as the reviewers note, it would be good to have quantitative comparison against existing methosd.\"}", "{\"title\": \"Review\", \"review\": \"The paper proposes a method for generating diverse high resolution images with vector-quantised autoencoders (VQ-VAEs). The approach can generate images with much higher visual fidelity than the original VQ-VAE paper via two main ingredients: (1) hierarchical multi-scale latent maps and (2) PixelSNAIL instead of PixelCNN.\\n\\nThe motivation and the contributions of this paper are very similar to De Fauw et al., 2019 (Hierarchical Autoregressive Image Models with Auxiliary Decoders; https://arxiv.org/abs/1903.04933), in that De Fauw et al. also used hierarchical VQ-VAEs (specifically, 2-layer) to generate high fidelity images. Also their autoregressive priors are closely related to PixelSNAIL. I'm assuming the authors were not aware of this paper, as they did not cite it. De Fauw et al. report IS/FID/Test NLL, none of which this paper reports. Given De Fauw et al. 2019 was submitted to arxiv on March 9th, this should be considered concurrent work, but should be cited in the revision.\\n\\nPros\\n- Clear exposition and motivation\\n- High fidelity and diversity in the generated images\\n\\nCons\\n- No test NLL comparisons with other likelihood based approaches (e.g. SPN, Parallel Multiscale)\\n- I'd have liked to see Inception/FID results from the proposed model (as done by De Fauw et al., 2019).\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Well written paper with great looking generated images\", \"review\": \"This paper proposes to use Hierarchical VQ-VAE for the purposes of large image generation. The paper is written clearly and well justified.\\n\\nThe authors extend the originally proposed VQ-VAE model to learn two (top & bottom) level hierarchies of images. The only con of the model is that, post-hoc PixelCNN (or PixelSnail in this paper) needs to be used to learn the prior over discrete codes in order to sample images at generation time.\\n\\nAlthough authors claim that the model generates diverse & high quality looking it would be great to put some quantitative number on it. Doing with side-by-side samples from BigGAN and Hierarchical VQ-VAE and asking people to rate which models generated samples they prefer. As well as it would be great to see the nearest neighboring training images from the dataset according to closest distance in the embedding space.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
H1gHELLK_V
Learning to Defense by Learning to Attack
[ "Zhehui Chen", "Haoming Jiang", "Yuyang Shi", "Bo Dai", "Tuo Zhao" ]
Adversarial training provides a principled approach for training robust neural networks. From an optimization perspective, adversarial training essentially solves a minmax robust optimization problem. The outer minimization tries to learn a robust classifier, while the inner maximization tries to generate adversarial samples. Unfortunately, such a minmax problem is very difficult to solve due to the lack of convex-concave structure. This work proposes a new adversarial training method based on a general learning-to-learn framework. Specifically, instead of applying existing hand-designed algorithms for the inner problem, we learn an optimizer, which is parametrized as a convolutional neural network. At the same time, a robust classifier is learned to defend against the adversarial attacks generated by the learned optimizer. From the perspective of generative learning, our proposed method can be viewed as learning a deep generative model for generating adversarial samples, which is adaptive to the robust classification. Our experiments demonstrate that our proposed method significantly outperforms existing adversarial training methods on CIFAR-10 and CIFAR-100 datasets.
[ "Adversarial Training", "Learning to Learn/Optimize", "Nonconvex-Nonconcave Minmax Optimization" ]
Accept
https://openreview.net/pdf?id=H1gHELLK_V
https://openreview.net/forum?id=H1gHELLK_V
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "H1xPWEOP9E", "Sye8z12W5E", "Hyevqfi-5V" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690495199, 1555312398102, 1555309198538 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper21/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper21/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Very Interesting Idea\", \"review\": \"This paper proposes a very interesting idea, using the learning-to-learn framework to learn an attacker. I find this idea very novel in the literature and in retrospect, very natural. Furthermore, I believe using L2L framework to this adversarial setting is very promising as we can naturally generate many samples to fit L2L framework.\\n\\nThe experiments also look promising. I think this is a strong paper.\", \"rating\": \"5: Top 15% of accepted papers, strong accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good paper, though of unclear relevance\", \"review\": \"This paper proposes a way to train image classification models to be resistant to L-infinity perturbation attacks. The idea is to simultaneously learn the classification model and an adversary model that adds L-infinity-bounded perturbations to images, in order to maximally confuse the first model. This adversary model can use not only the image itself, but also gradient information from the classification model. It can even propose a perturbation, get gradient information on that perturbation, and then propose an updated perturbation, similarly to how projected gradient descent (PGD) can take multiple gradient steps to find a perturbation.\\n\\nThe main results are that training in this way improves adversarial accuracy compared to PGD, while improving training speed. On CIFAR-10 with epsilon=0.03, the proposed method gets 51.5% accuracy against a PGD adversary, whereas the PGD-trained model gets 40.7% accuracy. Madry et al. (2017) report better accuracy of 47%, but the proposed method still improves upon this. Moreover, the model-based adversary is faster than PGD, as shown by faster training times (more than 2x faster to get similar accuracy as PGD, and the best model is still about 50% faster).\\n\\nOverall, I believe the paper is above the acceptance threshold from a quality perspective, and likely in the top 50% of accepted papers. However, it may not be a good fit for the topic of this workshop. Technically you could argue that the adversary is synthesizing a perturbation to an image, so this is some sort of structured generation. Therefore I give an overall rating of 3, and defer to the workshop organizers regarding appropriateness.\", \"minor_note\": \"In Algorithm 2, I think g(x_i, u_i; \\\\phi) should use the \\\\mathcal{A} notation used elsewhere.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
S1lNELLKuN
AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows
[ "Aditya Grover", "Christopher Chute", "Rui Shu", "Zhangjie Cao", "Stefano Ermon" ]
The goal of unpaired cross-domain translation is to learn useful mappings between two domains, given unpaired sets of datapoints from these domains. While this formulation is highly underconstrained, recent work has shown that it is possible to learn mappings useful for downstream tasks by encouraging approximate cycle consistency in the mappings between the two domains [Zhu et al., 2017]. In this work, we propose AlignFlow, a framework for unpaired cross-domain translation that ensures exact cycle consistency in the learned mappings. Our framework uses a normalizing flow model to specify a single invertible mapping between the two domains. In contrast to prior works in cycle-consistent translations, we can learn AlignFlow via adversarial training, maximum likelihood estimation, or a hybrid of the two methods. Theoretically, we derive consistency results for AlignFlow which guarantee recovery of desirable mappings under suitable assumptions. Empirically, AlignFlow demonstrates significant improvements over relevant baselines on image-to-image translation and unsupervised domain adaptation tasks on benchmark datasets.
[ "alignflow", "domains", "translation", "cycle consistent", "multiple domains", "unpaired", "framework", "flows", "normalizing", "goal" ]
Accept
https://openreview.net/pdf?id=S1lNELLKuN
https://openreview.net/forum?id=S1lNELLKuN
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HkgSzVdw9N", "rkg9el6B9V", "H1eSu7BLFN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690509190, 1555578865863, 1554563948730 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper20/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper20/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Great paper\", \"review\": \"The paper proposes AlignFlow, an efficient way of implementing cycle consistency principle using invertible flows. The paper is clearly written and I really enjoyed reading it!\", \"pros\": [\"Clever combination of existing ideas (use invertible mappings rather than encoder-decoder pairs in cycleGAN)\", \"simple to implement\", \"works well in practice\"], \"this_paper_proposes_a_related_idea_and_might_be_worth_discussing\": \"\", \"invertible_autoencoder_for_domain_adaptation_https\": \"//arxiv.org/pdf/1802.06869.pdf\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Taking cycle consistency to a new level\", \"review\": \"This paper shows how to use flow models for unpaired image to image translation, by leveraging the invertbility of flows, by sharing a common latent space between two models which map from this latent space into the two domains of interest.\", \"pros\": [\"some nice guarantees due to the invertbility properties of flows\", \"good empirical results - showing some of the baselines in the cycleGAN paper.\", \"building on top of prior work which uses adversarial training to train flows.\"], \"cons\": [\"RNVPs require more computation to achieve high quality samples, due to the local structure of the model and the reliance on checkerboard and channel alternating patterns. There is no discussion on the model size in AlignFlow and that required by CycleGAN.\", \"lacking some of the most impressive results from cyclegan.\"], \"question_for_the_authors\": [\"in my experience, RNVPs are quite fiddly to train. I expect that when adversarial training is added to the mix, things get even more fiddly. What did you do to stabilize your model?\", \"how sensitive is the model to the choice of 'lambda_a and \\\\lambda_b? I could not find any discussion on that - nor the values for \\\\lambda_a and \\\\lambda_b in the paper.\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
HJg4E8IFdE
Adjustable Real-time Style Transfer
[ "Mohammad Babaeizadeh", "Golnaz Ghiasi" ]
Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another. Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the user doesn't have control over the synthesized output. Moreover, the style transfer depends on the hyper-parameters of the model, with a varying "optimum" for different input images. Therefore, if the stylized output is not appealing to the user, she/he has to try multiple models or retrain one with different hyper-parameters to get a favorite stylization. In this paper, we address these issues by proposing a novel method which allows adjustment of crucial hyper-parameters, after training and in real time, through a set of manually adjustable parameters. These parameters enable the user to modify the synthesized outputs from the same pair of style/content images, in search of a favorite stylized image. Our quantitative and qualitative experiments indicate how adjusting these parameters is comparable to retraining the model with different hyper-parameters. We also demonstrate how these parameters can be randomized to generate results which are diverse but still very similar in style and content.
[ "Style transfer", "Generative models" ]
Accept
https://openreview.net/pdf?id=HJg4E8IFdE
https://openreview.net/forum?id=HJg4E8IFdE
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "H1liATOP5V", "ryeccJgV94", "rye7oAUf5E" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555693010755, 1555459985664, 1555357338875 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper19/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper19/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"This work presents a method for style transfer which enables users to modify the output via adjusting different hyperparameters. The reviewers agreed that this is a well-written paper with interesting experimental results.\"}", "{\"title\": \"Review for \\\"Adjustable Real-time Style Transfer\\\"\", \"review\": \"I would like to prephase my review by saying that while I have reasonable experience in generative modeling, I know very little of style transfer beyond the basics, and this is the first paper I review in the topic, so my review cannot be more than an educated guess.\\n\\nThe paper at hand studies the problem of doing style transfer when the desired output is not just one image, but a collection of diverse images given a single style and content. Furthermore, the paper concentrates in real time generation (i.e. obtaining several images according to certain parameters without a need for retraining). This last part is where the novelty of the paper relies. To address this problem, the authors include as part of the network parameters alpha_c and alpha_s that are taken as an input for a neural network that produces as a feedforward pass the new images, and that's trained with images coming from weighted style transfer (eqs 2-3). The conditioning is done via instance normalization, very similar to [1].\\n\\nThe paper is very well written, the problem description and the algorithms (i.e. both contributions) are clear, and the method seems to 'perform well'. My main criticisms are that no quantitative evaluation or user studies have been done to assess the 'performance' of the model, and that there doesn't seem to be any fundamentally new ideas in play. Namely, it seems like a natural extension of existing methods. However, the ideas are well arranged and executed, and according to my judgement this merits enough for publication at a workshop like this one.\\n\\n[1]: https://arxiv.org/pdf/1610.07629.pdf\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Interesting work, but lacking quantitative evaluation.\", \"review\": \"This paper presents an adjustable real-time style-transfer approach for generating a re-styled image given a style image and a content image, with the contribution that the stylisation of generated images could be adjusted/controlled at inference time by changing a few tuning parameters. Specifically, this is achieved by modeling as input the set of weights controlling the effect of the style and content captured by each layer of the network. The authors present various qualitative analysis to compare the proposed approach with existing works (StyleNet).\\n\\nMy major concern is the lack of quantitative experiments for evaluation. Specifically, how does the proposed approach contrast against existing models (e.g., StyleNet) in generating different stylizations with more diverse details (last paragraph, Section 2)? 
The authors only present one example in the experimental section, which seems to be insufficient.\\n\\nMeanwhile, this submission only studies generating images, not other modalities with highly-structured representations, which might not fit the theme of this workshop.\", \"typos\": [\"Eq (1): should $\\\\phi(\\\\bm{s})$ be $\\\\phi(\\\\bm{c})$ in the first equation?\", \"Eq (5): format issue\"], \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
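One plausible reading of the adjustable conditioning both reviews describe, sketched as instance normalization whose style parameters are rescaled at test time by a user-chosen alpha. The paper's exact parameterization may differ, and all names here are illustrative.

```python
import numpy as np

def adjustable_instance_norm(x, gamma, beta, alpha):
    """x: (C, H, W) feature map; gamma, beta: per-channel style parameters;
    alpha in [0, 1] dials this layer's stylization strength at test time."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True) + 1e-5
    x_hat = (x - mu) / sigma
    # Interpolate between "no style" (gamma=1, beta=0) and the full style.
    g = 1.0 + alpha * (gamma - 1.0)
    b = alpha * beta
    return g[:, None, None] * x_hat + b[:, None, None]

x = np.random.randn(64, 32, 32)
gamma = 1.0 + 0.1 * np.random.randn(64)
beta = 0.1 * np.random.randn(64)
full = adjustable_instance_norm(x, gamma, beta, alpha=1.0)  # trained stylization
mild = adjustable_instance_norm(x, gamma, beta, alpha=0.3)  # weaker stylization
```

Randomizing alpha per layer would give the "diverse but similar" outputs the abstract mentions.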
rJeQE8LYdV
Storyboarding of Recipes: Grounded Contextual Generation
[ "Khyathi Raghavi Chandu", "Eric Nyberg", "Alan Black" ]
Information need of humans is essentially multimodal in nature, enabling maximum exploitation of situated context. We introduce a dataset for sequential procedural (how-to) text generation from images in the cooking domain. The dataset consists of 16,441 cooking recipes with 160,479 photos associated with different steps. We set up a baseline motivated by the best performing model in terms of human evaluation for the Visual Story Telling (ViST) task. In addition, we introduce two models to incorporate high level structure learnt by a Finite State Machine (FSM) in the neural sequential generation process by: (1) Scaffolding Structure in Decoder (SSiD) (2) Scaffolding Structure in Loss (SSiL). These models show an improvement in empirical as well as human evaluation. Our best performing model (SSiL) achieves a METEOR score of 0.31, which is an improvement of 0.6 over the baseline model. We also conducted human evaluation of the generated grounded recipes, which reveals that 61% found our proposed (SSiL) model better than the baseline model in terms of overall recipes, and 72.5% preferred our model in terms of coherence and structure. We also discuss an analysis of the output, highlighting key NLP issues as prospective directions.
[ "Multimodal", "Generation", "Structure", "Scaffold", "Multitask" ]
Accept
https://openreview.net/pdf?id=rJeQE8LYdV
https://openreview.net/forum?id=rJeQE8LYdV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HJg0R7dw9N", "B1xFykfE54", "ByxxcqfnKE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690453990, 1555468000615, 1554946695539 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper18/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper18/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"The paper proposes and interesting task/dataset, but the description of the model could be improved. It would be great to take this account for the camera-ready.\"}", "{\"title\": \"Interesting task and dataset. Good contribution.\", \"review\": \"This paper studies the problem of generating textual descriptions from sequences of images (i.e., a storyboard), by introducing a new dataset and two generation models. The dataset is in the cooking domain, contains about 16k recipes from the how-to blogs, each containing about 7-13 steps on average. The generated descriptions are evaluated with both automatic metrics (Bleu, Meteor, and Rouge-L), and human judgments.\\n\\nThe contribution of this paper is solid, given that the authors will release the datasets. I think this is a very interesting task and will inspire the development of more structure-aware generation models. My main complaint is that the descriptions of the two proposed models (SSiD and SSIL) is relatively vague. \\n\\nOther questions/ comments:\\nIn Section 4.2, the sequence of phases/states are denoted with sequences of length 4 (e.g. r = <p1, p2, p3, p4>. This is slightly confusing, since there can be many more phases/states, as shown in Table 2.\\n\\nIn the first paragraph of 4.2, the authors mentioned another source of supervision with unimodal textual recipes. How many unimodal recipes are used? Would it be possible to provide more details about them?\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Useful new dataset and interesting model, but model not described in enough detail\", \"review\": \"This paper provides:\\n(1) A new dataset of step-by-step recipes, where each recipe has both an image and a corresponding how-to text. The proposed task is to map from the images to the text.\\n(2) A description of a model that adds structure to the visual story telling baseline (global-local attention cascading network). They add structure in two ways: (a) SSiD: They cluster sentence representations of recipe sentences into clusters that are called \\\"phases\\\" - thus a recipe can be represented as a sequence of \\\"phases\\\". Then they learn a weighted FSA which gives the probabilities of transitioning from one phase to another (I think). This is somehow incorporated into the decoder, but I'm not sure how. (b) SSiL: They introduce a loss which says that the structure (i.e. phase sequence) of the images and the generated text must be the same. 
They show that (a) and (b) improve over the baseline on BLEU, METEOR, ROUGE-L, and human evaluation.\", \"pros\": [\"The dataset seems really useful for researchers working on structured multimodal image-to-text generation.\", \"The overall idea of the SSiD and SSiL models seems like a good idea, and it shows convincing performance improvements.\", \"There are some really nice visualizations of the phases provided on the dataset website: https://storyboarding.github.io/story-boarding/40topics.html\"], \"cons\": [\"The SSiD model is not described in nearly enough detail for the reader to understand it properly. See more detailed notes below.\", \"Reproducibility: Unless I missed something, the authors do not mention whether they will release the code (and the generated output) for their SSiD and SSiL models. Especially given that the models are not explained in enough detail, this severely limits the reproducibility and usefulness of the work. I doubt that any researcher would be able to reproduce the models given only the information in the paper.\"], \"comments_about_detail_of_ssid_model\": [\"There are no equations in section 4.2, thus there is no precise description of the many steps required to build and train the model.\", \"Though I could mostly understand the first two paragraphs of section 4.2 (though I would not be able to reproduce the results due to the many missing details), I was not able to understand the overall idea of the FSM. For example, what are the states of the FSM? Are the states the phases?\", \"The FSM is motivated as \\\"softer\\\" but I don't understand how or why it is softer or why softness is a good thing.\", \"Could you give a brief explanation of \\\"ergodic\\\" when you use it - this would help reader comprehension.\", \"\\\"These state transition probabilities are concatenated in the decoder.\\\" - can you give more detail what you mean here, with an equation?\", \"Part of the problem is that the baseline model (Glocal) is only very superficially described. Though the reader can of course read the Glocal paper for the details, it would be more accessible if you gave a more detailed description, as the Glocal model is not necessarily well-known or standard. More importantly, this paper should introduce notation in section 4.1 so that section 4.2 can refer to it - to make precise how exactly SSiD works.\", \"In Algorithm 1, I don't know what many of the lines mean. e.g. \\\"Apply FSM to dataset\\\", \\\"For each state find entropy and reverse sort\\\". Perhaps this is because I didn't understand earlier parts.\"], \"note\": \"You should reference \\\"Simulating Action Dynamics with Neural Process Networks\\\" https://arxiv.org/abs/1711.05313\\n\\nIn conclusion, this dataset looks very useful, the model sounds interesting and seems high-performing, but the model is so insufficiently described that it will frustrate researchers who wish to build on these results. Once the SSiD model is described properly (and the code released), this will be a great paper.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
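Since both reviews flag the FSM/phase machinery as under-specified, a small sketch of one consistent interpretation may help: recipes are mapped to sequences of cluster IDs ("phases"), and a transition matrix over phases is estimated from those sequences. This is an assumption-laden reconstruction, not the authors' algorithm.

```python
import numpy as np

def phase_transitions(recipes_as_phases, K):
    """Estimate P(next phase | current phase) from recipes that have been
    mapped to sequences of phase IDs in {0, ..., K-1} by clustering."""
    T = np.zeros((K, K))
    for seq in recipes_as_phases:
        for a, b in zip(seq[:-1], seq[1:]):
            T[a, b] += 1
    return T / np.maximum(T.sum(axis=1, keepdims=True), 1)

recipes = [[0, 1, 2, 2, 3], [0, 2, 3], [0, 1, 3]]   # toy phase sequences
print(phase_transitions(recipes, K=4))
```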
BJlQEILY_N
Generating Molecules via Chemical Reactions
[ "John Bradshaw", "Matt J. Kusner", "Brooks Paige", "Marwin H. S. Segler", "José Miguel Hernández-Lobato" ]
Over the last few years exciting work in deep generative models has produced models able to suggest new organic molecules by generating strings, trees, and graphs representing their structure. While such models are able to generate molecules with desirable properties, their utility in practice is limited due to the difficulty in knowing how to synthesize these molecules. We therefore propose a new molecule generation model, mirroring a more realistic real-world process, where reactants are selected and combined to form more complex molecules. More specifically, our generative model proposes a bag of initial reactants (selected from a pool of commercially-available molecules) and uses a reaction model to predict how they react together to generate new molecules. Modeling the entire process of constructing a molecule during generation offers a number of advantages. First, we show that such a model has the ability to generate a wide, diverse set of valid and unique molecules due to the useful inductive biases of modeling reactions. Second, modeling synthesis routes rather than final molecules offers practical advantages to chemists who are not only interested in new molecules but also suggestions on stable and safe synthetic routes. Third, we demonstrate the capabilities of our model to also solve one-step retrosynthesis problems, predicting a set of reactants that can produce a target product.
[ "Molecules", "Deep generative model of molecules", "molecule optimization", "deep generative model of graphs" ]
Accept
https://openreview.net/pdf?id=BJlQEILY_N
https://openreview.net/forum?id=BJlQEILY_N
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "BkeGm4OP5E", "Sygio3cbcE", "BkeOzvDTFE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690521618, 1555307682692, 1555031824262 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper17/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper17/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Interesting paper with thorough experiments\", \"review\": \"This paper presents Molecule Chef, a model that maps between continuous latent codes and bags of readily available reactants. When combined with existing models that predict what products will result from a reaction with the given reactants, this model can be used to propose methods to synthesize new compounds. The continuous latent space is also useful for enabling continuous search for reactions that lead to products with desired properties.\\n\\nOverall, this is a strong paper, both in terms of motivation and results:\\n- The pipeline of first sampling multisets of reactants from Molecule Chef, then using an existing model to predict products, generates 99% valid molecules (molecules known to exist) with a high degree of novelty (molecules not in the training data). This compares favorably with comparable baselines from previous work.\\n- It's possible to learn a predictor from latent code directly to some desirable property of the end product, such as QED score. Using this, it is possible to search in the latent space for reactions that lead to products with high QED score. It is shown that gradient-based search (which can only be done in this latent space, and not in the reactants space) is better than a naive random walk. This justifies the use of a continuous latent code, instead of directly generating bags of reactants.\", \"some_questions_i_had\": [\"It would be nice to know how well a model that doesn't have the latent code, and instead directly generates bags of reactants, does in terms of the generation quality metrics in Section 4.1, and the retrosynthesis experiments in 4.3.\", \"Section 4.4 was unclear--what are the \\\"undesirable features\\\", and why would we expect Molecule Chef to avoid such features?\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"This paper proposes a molecular generative model that generates molecules via a two-step process: 1) Generate a set of reactants from the latent space; 2) Predict the product of the generated reactants using a pre-trained reaction predictor. This two-step formulation provides synthesis routes of generated molecules, allowing end users to examine the synthetic accessibility of generated compounds. This approach is analogous to virtual screening approach in medicinal chemistry.\\n\\nThe proposed approach builds on a Wassarstein autoencoder. The encoder maps a set of reactants into a continuous vector using a gated graph neural network followed by a sum pooling. The decoder is a recurrent network that decodes the reactant molecules one by one. The output of the decoder is restricted to be a fixed set of reactant molecules, and the decoding process is modeled as generating a sequence of tokens (reactant ID).\\n\\nThis paper is well motivated. 
The reviewer agrees that the molecular recipe problem is important: the model should provide hints (e.g. one-step synthesis routes) of how generated molecules can be synthesized, so that chemists can synthesize the suggested compounds for experimental validation.\\n\\nMy concern of the proposed model is that it can only generate molecules that can be synthesized through a one-step reaction from a fixed reactant vocabulary (3180 reactants). This may imply that the proposed model can only cover a limited subset of chemical space. Another concern is lack of quantitive evaluation: For the local optimization experiment, the paper didn't compare with any previous approach (e.g., CVAE, GVAE). For retrosynthesis experiment, there is no quantitive evaluation at all. One suggestion is to conduct human evaluation: how often do the chemists think the suggested retrosynthesis plans make sense.\\n\\nNonetheless, the proposed method is interesting combination of generative modeling and chemical reaction prediction. The reviewer therefore votes for the acceptance of the paper.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
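The local-optimization experiment mentioned in these reviews relies on the latent space being continuous: a property predictor h (e.g. for QED) is trained on latent codes, and gradient ascent on h moves the code toward reactant bags whose products score higher. A toy stand-in for that loop; h, the sizes, and the starting point are all illustrative:

```python
import torch

h = torch.nn.Linear(16, 1)                  # latent -> predicted property (stand-in)
z = torch.randn(1, 16, requires_grad=True)  # start from some encoded reactant bag
opt = torch.optim.SGD([z], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = -h(z).mean()                     # ascend the predicted property
    loss.backward()
    opt.step()
# Decoding z (not shown) would emit a bag of reactant IDs, which a reaction
# predictor then turns into a product molecule.
```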
rklf4IUtOE
A Learned Representation for Scalable Vector Graphics
[ "Raphael Gontijo Lopes", "David Ha", "Douglas Eck", "Jonathon Shlens" ]
Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals and other objects in the natural world. In spite of such advances, a higher level understanding of vision and imagery does not arise from exhaustively modeling an object, but instead from identifying higher-level attributes that best summarize the aspects of an object. In this work we attempt to model the drawing process of fonts by building sequential generative models of vector graphics. This model has the benefit of providing a scale-invariant representation for imagery whose latent representation may be systematically manipulated and exploited to perform style propagation. We demonstrate these results on a large dataset of fonts and highlight how such a model captures the statistical dependencies and richness of this dataset. We envision that our model can find use as a tool for designers to facilitate font design.
[ "Representation Learning", "Image Synthesis", "Computer Vision", "Applications", "Deep Learning", "Scalable Vector Graphics", "Font Generation" ]
Accept
https://openreview.net/pdf?id=rklf4IUtOE
https://openreview.net/forum?id=rklf4IUtOE
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "SJeJSNuP9E", "B1xwnpPRFE", "r1lycHDqYV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690550741, 1555099054616, 1554834822545 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper16/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper16/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"Good application of generative models, well written\", \"review\": \"This paper proposes deep generative model consisting of VAE + autoregressive LSTM decoder for learning to generate fonts in SVG (scalable vector graphics format)\\n\\nThe paper is well written, well motivated. Perhaps it would be great to see some plot of different hyperparameters and different modeling choices and its effect on results. Additionally, it would be great if authors could elaborate on details of cost function that they used (I would imagine that it is a mix of classifier that predicts actions like moveTo/lineTo etc and well as regression module that predicts coordinates). \\n\\nOverall it looks like a well executed paper.\\nI recommend accept\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good idea for generating character images\", \"review\": \"This paper provided an interesting idea for generating character (letters and digits) images of various fonts. The proposed model consists of a VAE (as a class-conditioned encoder) and a SVG decoder (with LSTM and MDN layers) and can generate reasonable images for digits and letters.\", \"pros\": \"1. The generated characters are good. We can easily recognize the generated characters. And they are of various fonts.\\n\\n2. Since the encoder is class-conditioned, we can generate other characters of the same font given one character.\\n\\n3. The learned latent embeddings are meaningful.\\n\\nCons & questions:\\n\\n1. The description of the model is not very clear. It is better to add more details.\\n\\n2. Where are the references?\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SJlf488Y_4
Structured Prediction using cGANs with Fusion Discriminator
[ "Faisal Mahmood", "Wenhao Xu", "Nicholas J. Durr", "Jeremiah W. Johnson", "Alan Yuille" ]
We propose the fusion discriminator, a single unified framework for incorporating conditional information into a generative adversarial network (GAN) for a variety of distinct structured prediction tasks, including image synthesis, semantic segmentation, and depth estimation. Much like commonly used convolutional neural network - conditional Markov random field (CNN-CRF) models, the proposed method is able to enforce higher-order consistency in the model, but without being limited to a very specific class of potentials. The method is conceptually simple and flexible, and our experimental results demonstrate improvement on several diverse structured prediction tasks.
[ "Generative Adversarial Network", "GAN", "cGAN", "Structured Prediction", "Fusion." ]
Accept
https://openreview.net/pdf?id=SJlf488Y_4
https://openreview.net/forum?id=SJlf488Y_4
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "SJew-xFv9E", "SyghrwEN94", "Bkx5TMCf54" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555693567134, 1555478339867, 1555387074073 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper15/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper15/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"This paper presents a fusion discriminator which conditions upon extra information through fusion at multiple layers of the discriminator. The paper is well-executed and the experiments are convincing.\"}", "{\"title\": \"fusion discriminator for conditional GAN\", \"review\": \"This paper presents a fusion discriminator used in conditional image generation based on GANs. Specifically, instead of taking the concatenated representations of the condition image (input) and the generated image (output), the discriminator encodes the two image by \\\"fusion\\\", i.e. intermediate representations from the condition image encoder and the generated image encoder are combined (e.g. summed).\\n\\nThe paper is well-written. The approach is reasonable and the results look good. \\u0010The main concern is that the contribution is very focused: the idea of fusion is not new and applying it to cGAN seems to be a small contribution.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"discriminator fuses input/generator output at featuremap levels\", \"review\": \"In this paper, the problem of conditional image generation with conditional GANs is considered. The discriminator needs to take in both the input being conditioned on, as well as the generator output. Traditionally, the input and generator output are concatenated before passing into the discriminator network, while in this work, the input/generator output get passed through CNNs (with identical parameters) separately, and the features get merged at various levels. The authors argued that 1) theoretically the signal of fusing afterward is stronger than passing beforehand under certain conditions, and 2) empirically this discriminator architecture works better by a large margin on image synthesis, semantic segmentation and depth estimation.\", \"pros\": \"1. This paper is very clear and easy to follow.\\n2. The results show that the proposal gets better performance than concatenation by a large margin.\", \"cons\": \"1. As pointed out by this paper, a stronger signal does not necessarily mean better performance, hence I think the theoretical results do not make much sense here. Although the results seem promising, I am not sure if these results generalize to other tasks of varying dataset sizes and model architectures.\\n2. The novelty of this approach seems to be limited, and it's not clear how this approach can generalize to tasks other than image-to-image.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
BygbVL8KO4
Unsupervised Demixing of Structured Signals from Their Superposition Using GANs
[ "Mohammadreza Soltani", "Swayambhoo Jain", "Abhinav Sambasivan" ]
Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions. Most of the existing works implicitly assume that clean samples from the target distribution are easily available. However, in many applications, this assumption is violated. In this paper, we consider the observation setting in which the samples from a target distribution are given by the superposition of two structured components, and leverage GANs for learning the structure of the components. We propose a novel framework, demixing-GAN, which learns the distribution of the two components at the same time. Through extensive numerical experiments, we demonstrate that the proposed framework can generate clean samples from unknown distributions, which can further be used in demixing of unseen test images.
[ "Demixing", "Generative Models", "GAN", "Unsupervised Learning", "Structured Recovery" ]
Accept
https://openreview.net/pdf?id=BygbVL8KO4
https://openreview.net/forum?id=BygbVL8KO4
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "BJlcVNOP9V", "rkeFendXqE", "BkxyP8SZ9E" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690546263, 1555430384715, 1555285590768 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper14/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper14/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"A simple method for linear demixing of signals with GANs\", \"review\": \"The authors present a simple method for training GANs to demix images. They train two separate generator networks, take the sum of their output, and then perform inference by finding the latent vectors for both GANs which minimize the distance in pixel space to the original (mixed) image, with a regularization penalty on the magnitude of the latent vector.\\n\\nWhile the approach is very simple, the experiments looked promising. When the two data distributions were very different, the demixing GAN was able to cleanly separate out the two classes. I would have appreciated more rigorous comparison against alternative methods, especially more on ICA and possibly NMF as baselines. More comparison on real world data would be interesting as well - for instance quite a lot of biomedical data contains mixed signals from different types of tissue and automatic demixing is very valuable there.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting initial investigation in the GANs for demixing.\", \"review\": \"The authors investigate the use of a structured generative model\\nto perform demixing of data in an unsupervised way.\\nThe paper is well written (clearly highlights the lack of supervision prevents training a conditional generative model, and how structure is here key for both learning the model and performing demixing at inference time), and has thorough experiments on simple datasets. \\nThe main limitation - but one which the author recognize and start to investigate - is that there are no guarantee the current structure is an inductive bias strong enough to guarantee that the recovered separated signals correspond to the desired ones. \\nI worry that for complex datasets the approach would not yield the desired results.\\nAlso - one could argue the archetypal problem for source separation is the cocktail party problem - it would have been interesting to try some audio data.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
HJeZNLIt_4
A RAD approach to deep mixture models
[ "Laurent Dinh", "Jascha Sohl-Dickstein", "Razvan Pascanu", "Hugo Larochelle" ]
Flow based models such as Real NVP are an extremely powerful approach to density estimation. However, existing flow based models are restricted to transforming continuous densities over a continuous input space into similarly continuous distributions over continuous latent variables. This makes them poorly suited for modeling and representing discrete structures in data distributions, for example class membership or discrete symmetries. To address this difficulty, we present a normalizing flow architecture which relies on domain partitioning using locally invertible functions, and possesses both real and discrete valued latent variables. This Real and Discrete (RAD) approach retains the desirable normalizing flow properties of exact sampling, exact inference, and analytically computable probabilities, while at the same time allowing simultaneous modeling of both continuous and discrete structure in a data distribution.
[ "discrete variables", "generative models", "flow based models" ]
Accept
https://openreview.net/pdf?id=HJeZNLIt_4
https://openreview.net/forum?id=HJeZNLIt_4
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "Hkgk0XOPcV", "r1lCrUIL9N", "ByluvC8iYN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690439246, 1555617350203, 1554898528504 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper13/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper13/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"A nice paper that uses mixture of flows to capture discrete structure in the data, but lacks convincing experiments on real datasets\", \"review\": \"General:\\n\\nThis paper proposes a new framework to combine mixture models and invertible projections to model discrete structures in the data. This new method adds flexibility to existing flow-based generative models, especially in the case of modeling distributions with multiple modalities, holes, or clear clustering structures. I think this is a nice paper which tries to relax the limitation of existing flow methods -- it is usually difficult for them to map between two manifolds that have holes, are non-smooth, or have distinct modality patterns.\", \"pros\": [\"Nice idea and clear formulation. The method relaxes the constraint present in existing flow-based models that the entire projection needs to be invertible, instead the projection is only piecewise invertible -- but they are still fully invertible if the membership variable (auxiliary variable) k is given. Through introducing this auxiliary discrete variable, each partition in the data would correspond to a separate locally invertible function. I think this way would greatly improve the flexibility of flow models, relaxing the requirement for high-varying Jacobian term to shape the manifold in some cases (which is often hard to achieve). Also, this method preserves the merits of existing flow-based models (e.g. exact learning/inference)\", \"This paper includes a series of very clear visualizations to explain the methods and compare with the RealNVP model. The latent space visualizations at different coupling layers provide insights on how flows transform the space (or how one density space flows to another) step by step. I really enjoy reading these figures.\", \"The proposed method outperforms RealNVP in terms of log likelihood on several toy datasets.\"], \"cons\": [\"It is unknown how this approach would work in real settings. This paper only carried out the experiments on very simple data with limited capacities. There might be some issues when dealing with real dataset. I am a bit concerned about the applicability of this approach to practical problems, I think more thorough experiments need to done to prove its performance.\", \"How to pick the number of modes K ? Is this a hyperparameter ? If so, the results might be sensitive to it in real settings.\", \"In Appendix A the authors provide a piecewise linear formulation of f_z(x), is this general enough ? It is unclear to me how to design such functions when modeling real-world image or speech data.\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"A normalizing flow that uses a partition of the output space into non-overlapping regions and a separate invertible mapping on each region. 
Promising idea, but I have some practical concerns\", \"review\": \"This paper proposes an invertible transformation, similarly to a normalizing flow, that partitions the domain of a continuous density into non-overlapping regions, and uses a piece-wise invertible mapping for each of these regions.\\n\\nFrom the abstract, it may seem that the authors will tackle the case where x is a discrete variable; however the transformation is for distributions continuous support; I suggest to rewrite the abstract to avoid that misunderstanding.\\n\\nThe idea of the paper seems promising. My main concerns are of practical nature. In particular, the method introduces many parameters (the partitioning of the regions, the piece-wise mappings, etc.) that seem hard to tune in practice. Can the authors discuss how (and give the rationale why) to set all of these components their specific choices?\\n\\nAlso, how does the proposed approach scale with dimensionality? It is hard to partition a high-dimensional space into meaningful regions.\", \"notation\": \"What is K in the paper? It must be a set, since |K| is the number of components, what K isn't defined anywhere in the paper.\", \"typo\": \"patternm => pattern\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
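A one-dimensional toy of the RAD construction may clarify the "piecewise invertible" point debated in these reviews: f(x) = |x| is not globally invertible, but recording which piece of the partition x fell in (a discrete latent k) makes the pair (z, k) exactly recoverable to x.

```python
import numpy as np

def encode(x):
    k = (x >= 0).astype(int)   # discrete latent: index of the partition piece
    z = np.abs(x)              # continuous latent: invertible on each piece
    return z, k

def decode(z, k):
    return np.where(k == 1, z, -z)

x = np.random.randn(8)
z, k = encode(x)
print(np.allclose(decode(z, k), x))   # True: exact recovery given (z, k)
```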
r1gxEILFu4
Correlated Variational Auto-Encoders
[ "Da Tang", "Dawen Liang", "Tony Jebara", "Nicholas Ruozzi" ]
Variational Auto-Encoders (VAEs) are capable of learning latent representations for high dimensional data. However, due to the i.i.d. assumption, VAEs only optimize the singleton variational distributions and fail to account for the correlations between data points, which might be crucial for learning latent representations from datasets where a priori we know correlations exist. We propose Correlated Variational Auto-Encoders (CVAEs) that can take the correlation structure into consideration when learning latent representations with VAEs. CVAEs apply a prior based on the correlation structure. To address the intractability introduced by the correlated prior, we develop an approximation by averaging a set of tractable lower bounds over all maximal acyclic subgraphs of the undirected correlation graph. Experimental results on matching and link prediction on public benchmark rating datasets and spectral clustering on a synthetic dataset show the effectiveness of the proposed method over baseline algorithms.
[ "variational", "latent representations", "vaes", "correlations", "correlation structure", "capable", "high dimensional data", "due", "assumption", "singleton variational distributions" ]
Accept
https://openreview.net/pdf?id=r1gxEILFu4
https://openreview.net/forum?id=r1gxEILFu4
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "BJxiiXuv9E", "S1lulXBZ5N", "SkeYRMmW54" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690402710, 1555284719914, 1555276496623 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper12/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper12/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Interesting and thorough work.\", \"review\": \"In this paper, the authors investigate generalizing VAEs to problems where correlations can be found between data points. They leverage known results (namely, writing the distribution on tree-shaped graphical models as a function of pairwise joint distributions and marginals; for general graphs, they use mixture of tree distributions in a fashion reminiscent of tree-reweighted belief propagation) from graphical models to derive tractable evidence lower bound for the problem of interest.\\nThey report improved results on a variety of datasets (spectral clustering, collaborative filtering and link prediction) compared to vanilla VAEs and a graph net based approach.\\n\\nThis is a good paper; the ideas are original, technically interesting, and well presented (there are some issues with language but nothing distracting), and results are convincing.\\n\\nThere was a slight missed opportunity in explaining in further details how the new bound interacted with sampling from the posterior (this is currently hidden in appendix D: the authors leverage properties of gaussian distributions; in general it would be more complicated to sample from the joint knowing only pairwise/marginals).\", \"rating\": \"5: Top 15% of accepted papers, strong accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Decent paper on extending the VAE framework with derived correlated prior from graph structures\", \"review\": \"The paper is based on the idea of extending the VAE framework beyond the simplistic IID assumption that it makes on the data. They do so by using structured priors that explicitly take the correlations between data points into account. The methods can be used to learn meaningful representation directly from a weighted graph of the data as shown in the experiments section. The only downside is that the paper does not compare or provide any detail of how their method relates to hierarchical Bayesian models such as LDA that also capture global level correlational statistics in categorical data.\", \"minor\": \"Sentences repeated twice at the end of page 2.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rylgEULtdN
FVD: A new Metric for Video Generation
[ "Thomas Unterthiner", "Sjoerd van Steenkiste", "Karol Kurach", "Raphaël Marinier", "Marcin Michalski", "Sylvain Gelly" ]
Recent advances in deep generative models have led to remarkable progress in synthesizing high quality images. Following their successful application in image processing and representation learning, an important next step is to consider videos. Learning generative models of video is a much harder task, requiring a model to capture the temporal dynamics of a scene, in addition to the visual presentation of objects. While recent generative models of video have had some success, current progress is hampered by the lack of qualitative metrics that consider visual quality, temporal coherence, and diversity of samples. To this end we propose Fréchet Video Distance (FVD), a new metric for generative models of video based on FID. We contribute a large-scale human study, which confirms that FVD correlates well with qualitative human judgment of generated videos.
[ "Metric", "Evaluation", "Video Generation", "Generative Models" ]
Accept
https://openreview.net/pdf?id=rylgEULtdN
https://openreview.net/forum?id=rylgEULtdN
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HklSNVuPqE", "HJlR4udf9E", "SJ9vIX-54" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690541341, 1555363893847, 1555277409515 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper11/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper11/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"A new metric to evaluate generative models of video\", \"review\": \"The paper presents a new metric (FVD) for generative models of video. It basically builds on Frechet Inception Distance (FID) which has been proposed for Images and extends it to sequential data such as videos. It captures both the temporal coherence of the content of the video and the quality of each frame. Authors present a thorough evaluation of this proposed metric and show that it correlates with the qualitative human judgements of generated video.\\n\\nI am not an expert in this area. Still, I felt that the paper could have been presented in a better way. The new metric FVD is not well motivated. The paper is difficult to read, it is written as a summary, and most of experimental setup and evaluated systems are moved to appendices. Given that the main paper is presented in only 3 pages (with appendices, it is only 8 pages), I didn't understand the need of appendix sections. First three pages were not inclusive. If accepted, I would recommend to make the paper inclusive of all the details.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"metric for evaluating video generation\", \"review\": \"The paper extends the FID metric used for evaluating generative sample quality to video generation. They use I3D network that generalises the Inception architecture to sequential data. The paper provides extensive comparison with other baseline method and does a thorough human evaluation to show that their proposed metric has significant consensus with human evaluation.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SkgkEL8FdV
DIVA: Domain Invariant Variational Autoencoder
[ "Maximilian Ilse", "Jakub M. Tomczak", "Christos Louizos", "Max Welling" ]
We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain. We propose the domain invariant VAE (DIVA), a generative model that tackles this problem by learning three independent latent subspaces, one for the class, one for the domain and one for the object itself. In addition, we highlight that due to the generative nature of our model we can also incorporate unlabeled data from known or previously unseen domains. This property is highly desirable in fields like medical imaging where labeled data is scarce. We experimentally evaluate our model on the rotated MNIST benchmark where we show that (i) the learned subspaces are indeed complementary to each other, (ii) we improve upon recent works on this task and (iii) incorporating unlabeled data can boost the performance even further.
[ "generative modeling", "domain generalization" ]
Accept
https://openreview.net/pdf?id=SkgkEL8FdV
https://openreview.net/forum?id=SkgkEL8FdV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "B1gQTXdwqV", "BJlmMxzQ54", "r1lJBVcRF4" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690427267, 1555402763466, 1555108919330 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper10/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper10/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"need more experiments\", \"review\": \"The paper considers the problem of domain generalization: how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.\\n\\nThe paper needs to better define \\\"domain invariant\\\". The rotated MNIST dataset is used for evaluation. But I do not think rotated images are from different domains. The paper needs more convincible experiments to prove the effectiveness of the proposed methods.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Intuitive model design, but the experiments are kind of boring\", \"review\": \"The main contribution of this paper is the proposed Domain Invariant VAE (DIVA), which decomposes the latent space of a VAE into 3 parts: one for the class, one for the domain, and one for the rest. Additional auxiliary classifiers are introduced to encourage the separation of domain and class specific latent codes. This framework has also been extended to semi-supervised learning.\", \"pros\": \"The studied problem is interesting, and the model itself is also intuitive. The overall framework looks elegant.\", \"cons\": \"(i) There is nothing special that surprises me in the model. It follows standard VAE design and standard extension to semi-supervised learning for VAE. It naturally extends the original VAE to incorporate domain-specific latent code inside. So I feel the whole model design is kind of boring.\\n\\n(ii) Only experiments on MNIST are considered, so the experiments are relatively less interesting. As also noted by the authors, more interesting experiments on more challenging datasets are desired.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rygJV8UKuV
Variational autoencoders trained with q-deformed lower bounds
[ "Septimia Sârbu", "Luigi Malagò" ]
Variational autoencoders (VAEs) have been successful at learning a low-dimensional manifold from high-dimensional data with complex dependencies. At their core, they consist of a powerful Bayesian probabilistic inference model, to capture the salient features of the data. In training, they exploit the power of variational inference, by optimizing a lower bound on the model evidence. The latent representation and the performance of VAEs are heavily influenced by the type of bound used as a cost function. Significant research work has been carried out into the development of tighter bounds than the original ELBO, to more accurately approximate the true log-likelihood. By leveraging the q-deformed logarithm in the traditional lower bounds, ELBO and IWAE, and the upper bound CUBO, we bring contributions to this direction of research. In this proof-of-concept study, we explore different ways of creating these q-deformed bounds that are tighter than the classical ones and we show improvements in the performance of such VAEs on the binarized MNIST dataset.
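The q-deformed logarithm underlying this abstract is the Tsallis q-logarithm, which recovers the natural log as q approaches 1. Below is a small, hedged sketch of the function and of one illustrative way it might be swapped into a single-sample ELBO estimator; whether the resulting objective is still a lower bound is exactly the point the reviewers below question.

```python
import torch

def log_q(x, q):
    # Tsallis q-deformed logarithm: (x^(1-q) - 1) / (1 - q); natural log as q -> 1.
    if abs(q - 1.0) < 1e-8:
        return torch.log(x)
    return (x.pow(1.0 - q) - 1.0) / (1.0 - q)

def q_elbo_sample(log_joint, log_posterior, q):
    # Hypothetical single-sample "qELBO"-style term: the ordinary ELBO estimator
    # is log w with w = p(x, z) / q(z | x); here log is replaced by log_q.
    # For q < 1, log_q(w) >= log(w), so this sits above the usual estimator.
    w = torch.exp(log_joint - log_posterior)
    return log_q(w, q)
```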
[ "variational autoencoders", "ELBO", "IWAE", "q-deformed logarithm", "tight bounds" ]
Accept
https://openreview.net/pdf?id=rygJV8UKuV
https://openreview.net/forum?id=rygJV8UKuV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "Hygi3WKP9V", "Byl906NbqE", "HyxRBRMZ94" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555694002638, 1555283410448, 1555275334128 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper9/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper9/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"This paper has interesting contributions. However both reviewers agree it would be valuable to add an analysis as to whether the objective is a bound on the log marginal or not.\"}", "{\"title\": \"Missing a few elements\", \"review\": \"In the paper, the authors suggest modifying various VAE bounds (ELBO, IWAE) by replacing logarithm by q-logarithms. The paper is missing a few steps to be satisfying in my opinion - at its heart, it takes a lower bound to the true data evidence, and upper bounds it using a q-logarithm with q<1.0. While the new estimate would be 'tighter' it if it was still a lower bound, the authors provide no guarantee that the resulting qELBO are still in fact lower bounds; maximizing them may therefore not make sense.\\nThe q values used are very close to 1, which suggests the q-logarithm is used a mild hyperparameter to smooth the objective function; gains on experimental data are minor.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Rigor and clarity lacking, but a reasonable contribution\", \"review\": \"This paper proposes a new lower bound for variational inference based on q-deformed logarithms, a generalization of the logarithm that augments it with a q parameter that controls its concavity. The authors train VAEs with this bound and report performance improvements over the ELBO, but not the IWAE bound.\\n\\nThe paper has issues with clarity, and rigor, but is overall a reasonable contribution.\", \"specific_feedback\": \"1. The derivation of the q-deformed lower bounds is lacking in rigor. The new bounds are just stated by swapping the q-logarithm for the standard logarithm without discussing whether that is possible. A proof that the new bound is a valid lower bound would be useful.\\n\\n2. Similarly, it is not clear if swapping in the q-logarithm gives a lower bound on the log likelihood of the data (q=1) or the q-deformed log likelihood of the data for a specific value of q. The latter seems more likely. If that is the case, a discussion of the benefits and drawbacks of optimizing a lower bound on the q-deformed log likelihood of the data would be helpful to the reader.\\n\\n3. In the actual training procedure the authors optimize bounds with different values of q for each batch. An argument should be made that this procedure is still optimizing a valid lower bound on the (possibly q-deformed) log likelihood.\\n\\n4. The optimization procedure for q is not clearly stated. It seems like q* is set to make the qELBO evaluated with q=q^* match qELBO* as closely as possible. So perhaps the optimization procedure attempts to minimize (qELBO* - qELBO(q=q*))^2 w.r.t. q*. This should be stated clearly, and the optimization procedure should be clearly motivated.\\n\\n5. 
In evaluation, what method is used to estimate log \\\\hat{p}_x?\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
S1xA7ILFOV
On Scalable and Efficient Computation of Large Scale Optimal Transport
[ "Yujia Xie", "Minshuo Chen", "Haoming Jiang", "Tuo Zhao", "Hongyuan Zha" ]
Optimal Transport (OT) naturally arises in many machine learning applications, where we need to handle cross-modality data from multiple sources. Yet the heavy computational burden limits its widespread use. To address the scalability issue, we propose an implicit generative learning-based framework called SPOT (Scalable Push-forward of Optimal Transport). Specifically, we approximate the optimal transport plan by a pushforward of a reference distribution, and cast the optimal transport problem into a minimax problem. We can then solve OT problems efficiently using primal dual stochastic gradient-type algorithms. We also show that we can recover the density of the optimal transport plan using neural ordinary differential equations. Numerical experiments on both synthetic and real datasets illustrate that SPOT is robust and has favorable convergence behavior. SPOT also allows us to efficiently sample from the optimal transport plan, which benefits downstream applications such as domain adaptation.
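A hedged sketch of the minimax idea: a generator pushes reference noise forward to coupled pairs, and two networks act as Lagrange multipliers for the marginal constraints. Architectures, dimensions, and the constraint handling (e.g., any Lipschitz regularization of the multipliers, which the second reviewer below discusses at length) are simplified assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn

# Generator G maps reference noise z to a coupled pair (x_hat, y_hat);
# lam_x, lam_y play the role of Lagrange multipliers for the two marginals.
G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))  # 2 dims for x_hat, 2 for y_hat
lam_x = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
lam_y = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

def spot_lagrangian(x, y, z, cost=lambda a, b: (a - b).pow(2).sum(-1)):
    # Minimized over G, maximized over lam_x / lam_y: transport cost plus
    # penalty terms that vanish when the generated marginals match mu and nu.
    out = G(z)
    x_hat, y_hat = out[:, :2], out[:, 2:]
    transport = cost(x_hat, y_hat).mean()
    cx = lam_x(x).mean() - lam_x(x_hat).mean()
    cy = lam_y(y).mean() - lam_y(y_hat).mean()
    return transport + cx + cy
```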
[ "Scalable optimal transport", "generative model", "neural ODE" ]
Accept
https://openreview.net/pdf?id=S1xA7ILFOV
https://openreview.net/forum?id=S1xA7ILFOV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "B1gb4VOPq4", "ByeQl_FmqN", "rygv3L3f5E" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690536937, 1555433451388, 1555379886995 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper7/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper7/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"A clever combination of many ideas in optimal transport and neural density estimation, with surprisingly rigorous experiments\", \"review\": \"This was submitted as a workshop paper but I think with just a little more detail it would be a strong contender for a regular conference paper. The authors show how a combination of primal-dual estimation for Lagrangian problems with neural ODEs for density estimation can be used to solve the regularized optimal transport problem on high dimensional spaces. They show with numerous experiments on challenging domain-to-domain problems that the joint probability distribution they learn can be used for meaningful joint generation tasks, such as pix2pix-like style transfer in the image domain. Overall this paper was a pleasure to read - clearly motivated, tackling an important problem and using state-of-the-art methods to achieve it. I would have appreciated a little more discussion of the stability of gradient ascent/descent for solving the Lagrangian multiplier formulation of their objective, as I have found these kinds of problems very hard to work with in a stochastic domain, but overall the paper was compelling and timely.\", \"rating\": \"5: Top 15% of accepted papers, strong accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Unclear if using neural nets to approximate lagrangians is a good idea.\", \"review\": [\"1. The paper makes a variety of unsupported claims to justify their approach. Mainly this has to do with assuming that reducing the problem to a neural-net parametrized minimax game is inherently a good idea.\", \"The paper assumes that GAN-stype minimax problems are easier to solve than having neural nets parametrize the dual. This is a claim that requires substantial elaboration, since it's widely acknowledged that GANs are hard to train, and one would rather have a non-minimax optimization objective than a minimax one.\", \"The paper also claims that because neural nets have good inductive biases for tasks such as object recognition, they would serve as good Lagrange multipliers. I'm not sure where this claim comes from. Lagrange multipliers simply need to diverge to indicate constraint violations. These claims about statistical learnability dont seem relevant to the problem.\", \"'direct estimation of probability distribution is not always convenient' seems backwards. Implicit generative models are a necessary evil when you're trying to generate from certain types of neural networks, not that they're more convenient than using MLE.\", \"Citing the universal approximation theorem for DNNs is vacuous in this setting considering that you could just solve the original nonparametric problem.. the claim should be that neural nets provide lower approximation error for the amount of required computation.\", \"2. 
There should be substantially more work analyzing the effect of the Lipschitz constraint on the multipliers.\", \"Currently, the Lipschitz constraint comes out of nowhere. If the Lagrange multipliers are unbounded, then you've failed at the minimax objective, and the resulting solution is undefined in the primal. You have an optimal transport map that fails to satisfy the marginal constraints, so it does you no good to add arbitrary restrictions to the Lagrangian.\", \"If you want to continue to go down this road - you should really characterize what this Lipschitz constraint does to the objective. Equation (9) is a good start, but it's not terribly clear what happens here, because you now have two metrics: one implied by the cost (c) and another one from the Lipschitz constraint. I think you'd be in better shape if you just reformulate the entire paper in terms of W_1 with a fixed distance metric and define the Lipschitz constant there.\", \"Finally, I conjecture (I think you might be able to do this by examining the subgradient structure of W_1 near X ...) that if you're in W_1 with eta > 1, then you would actually be solving the original problem. That would be a pretty interesting result, and would actually justify much of the paper.\", \"3. Baseline comparisons seem lacking.\", \"A natural approach is just to apply a neural ODE to the pushforward formulation from Monge. Yes, this isn't always feasible, but the proposed SPOT procedure is heuristic to start with. Does this approach not work?\", \"The generation experiments don't make sense from the narrative of the paper. If you've actually solved the OT problem, the generated images are trivial (i.e. recovered from the original training set) because you match the marginals. If you're generating new images, then you've failed to solve the OT marginal constraints. Either way this seems problematic.\", \"The paper is motivated in terms of continuous OT, but the experiments all operate on discrete, empirical distributions (plus generation experiments, but see my comments above.. the fact that you generalize just means you didn't solve the original OT problem.. this is a negative, not a plus). This is not only a conceptual gap in the paper, it means that the authors should have set up appropriate comparisons to entropically regularized / subsampled OT algorithms which are much more principled and mature.\", \"4. Consider reframing the paper\", \"The paper has some interesting results, about generating paired samples, and the importance of doing such tasks and so on. However, the paper is currently motivated as approximating the underlying OT objective. From this latter motivation, the paper is very lacking - there are many conceptual holes about whether the lagrangian constraints are holding, or if this is a good idea, or if you've set up the appropriate baselines.\"], \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rkepX8LFuE
Improved Adversarial Image Captioning
[ "Pierre Dognin", "Igor Melnyk", "Youssef Mroueh", "Jarret Ross", "Tom Sercu" ]
In this paper we study image captioning as conditional GAN training, proposing both a context-aware LSTM captioner and a co-attentive discriminator, which enforces semantic alignment between images and captions. We investigate the viability of two discrete GAN training methods: Self-critical Sequence Training (SCST) and Gumbel Straight-Through (ST), and demonstrate that SCST shows more stable gradient behavior and improved results over Gumbel ST.
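Of the two discrete gradient estimators compared here, Gumbel Straight-Through is easy to sketch; the version below is a standard formulation, not code from the paper. PyTorch also ships an equivalent as torch.nn.functional.gumbel_softmax(logits, tau, hard=True).

```python
import torch
import torch.nn.functional as F

def gumbel_straight_through(logits, tau=1.0):
    # Perturb logits with Gumbel noise, take a soft sample, then discretize in
    # the forward pass while letting gradients flow through the soft sample.
    g = -torch.log(-torch.log(torch.rand_like(logits).clamp_min(1e-20)))
    y_soft = F.softmax((logits + g) / tau, dim=-1)
    y_hard = F.one_hot(y_soft.argmax(dim=-1), logits.size(-1)).float()
    return y_hard + y_soft - y_soft.detach()  # forward: hard one-hot; backward: soft
```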
[ "image captioning", "discrete GAN training" ]
Accept
https://openreview.net/pdf?id=rkepX8LFuE
https://openreview.net/forum?id=rkepX8LFuE
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "HylTRfYv54", "B1xgOSOWcN", "Byl9kPcRtE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555694293102, 1555297639670, 1555109602143 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper6/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper6/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"As the reviewers note, it is true that both SCST and Gumbel ST have been utilized in the past for reinforcement-learning style problems. However the application to GAN-based image captioning and the thorough experiments will make a nice contribution to the workshop.\"}", "{\"title\": \"Review\", \"review\": \"This paper compares two adversarial training approach for image captioning: self-critical sequence training (SCST) and Gumbel Straight-Through method. The discriminator utilizes a co-attention pooling mechanism to compute the compatibility of the caption and image. During training, the discriminator is trained to distinguish real captions from fake, as well as to detect unrelated real sentences (randomly chosen). To backpropogate the gradient from the discriminator to generator, the author using both SCST and Gumbel ST. The experimental results show that SCST performs better in terms CIDEr and BLEU.\\n\\nThis paper presents a thorough empirical evaluation between the two methods. However, the reviewer's main criticism of this paper is lack of technical innovation. Both SCST and Gumbel ST method have been proposed in previous work and the contribution of this paper is no more than empirical comparison.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The experiments look solid, but the overall contribution of the paper looks marginal\", \"review\": \"The main contribution of this paper is an improved GAN model for image captioning. First, a context-aware LSTM captioner is proposed, which provides some moderate modifications to the original adaptive attention paper. Second, a stronger co-attentive discriminator is introduced, which shows better performance than previous discriminator design. Third, SCST is used for this GAN training.\", \"pros\": \"The experiments are relatively well designed to understand the effect of each individual model design.\", \"cons\": \"The novelty of this paper is limited. It improves the original conditional GAN for image captioning marginally by using different generator and discriminator design, and a new training method. However, each individual module only provides marginal contributions. There is no surprise in the generator and discriminator design, and the usage of SCST for GAN training is also a direct application of previous methods.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
S1lnmLLKu4
DUAL SPACE LEARNING WITH VARIATIONAL AUTOENCODERS
[ "Hirono Okamoto", "Masahiro Suzuki", "Itto Higuchi", "Shohei Ohsawa", "Yutaka Matsuo" ]
This paper proposes a dual variational autoencoder (DualVAE), a framework for generating images corresponding to multiclass labels. Recent research on conditional generative models, such as the Conditional VAE, exhibits image transfer by changing labels. However, when the dimension of the multiclass labels is large, these models cannot transfer images across labels, because a separate distribution must be learned for each class, which leads to a shortage of training data. Therefore, instead of conditioning with labels, we condition with latent vectors that include label information. DualVAE divides one distribution of the latent space by linear decision boundaries using labels. Consequently, DualVAE can easily transfer an image by moving a latent vector toward a decision boundary and is robust to the missing values of multiclass labels. To evaluate our proposed method, we introduce a conditional inception score (CIS) for measuring how much an image changes to the target class. We evaluate the images transferred by DualVAE using the CIS on the CelebA dataset and demonstrate state-of-the-art performance in a multiclass setting.
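The transfer step described here, moving a latent vector toward (and across) a linear decision boundary, can be sketched as follows; w and b denote the learned boundary parameters for the target attribute and are assumptions about the interface, not the authors' API.

```python
import torch

def transfer_latent(z, w, b, target=1.0, margin=1.0):
    # Place latent codes z (B, d) on the `target` side of the attribute
    # boundary w^T z + b = 0, at a fixed signed distance `margin`; decoding
    # the result is the image transfer step.
    w_unit = w / w.norm()
    dist = (z @ w + b) / w.norm()  # signed distance of each code to the boundary
    return z + (target * margin - dist).unsqueeze(-1) * w_unit
```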
[ "variational autoencoder", "image transfer", "multi-class", "dual space" ]
Accept
https://openreview.net/pdf?id=S1lnmLLKu4
https://openreview.net/forum?id=S1lnmLLKu4
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "r1gx-EFP5V", "S1lOHaeVc4", "S1gg8aFW5V" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555694583953, 1555463487845, 1555303751864 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper5/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper5/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"This paper proposes a VAE for images with multiple labels and a new evaluation metric called Conditional Inception Score (CIS) that measures the influence of the target class to the image. The experiments are reasonable.\"}", "{\"title\": \"looks good. I mainly have concerns about the inference procedure.\", \"review\": \"This work describes a dual variational autoencoder for image style transfer. Different from CVAE, the conditional assumption here is that a label is dependent on the latent representation of the image. The other contribution is a metric for evaluating conditionally transferred images.\\n\\nThe paper is generally well-written and explained in most parts. Motivations are clear. The experimental results look good. \\n\\nHowever, I am not very convinced about the inference procedure of the model, as shown in Equation (3). This looks artificial, since you are feeding the linear combination of the latent and dual code (i.e., the label space) into the decoder. I see no reason why this will give good transferred image, since the decoder is trained with different input. I personally think you can have an inference network which takes the latent code and the label to generate the image. Did I miss something?\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The paper is not well-written and the proposed metric is not justified\", \"review\": \"This paper proposes a dual variational autoencoder(DualVAE) model for generating images with multi labels. Compared to the existing methods like conditional VAE, the difference is the introduction of a dual vector to the model, which is a parameter of a decision boundary. The author claims that such a change can avoid the exponential dependency of the number of class.\\n\\nThe second contribution of the paper is to propose a new metric to evaluate the transferred images corresponding to multiclass labels, i.e. Conditional Inception Score (CIS).\\n\\nEmpirical studies are conducted on the CelebA dataset. DualVAE achieves better CIS scores than three existing methods.\\n\\nThe paper is not well-written and hard to follow. Some places are not precisely stated, e.g. \\u201cexhibit image transfer by changing labels\\u201d in the abstract. The experiment is not convincing, as it only compares different methods on the proposed metric. How do you justify CIS is an appropriate metric?\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
ryg3XILFdN
Deep Generative Models for Generating Labeled Graphs
[ "Shuangfei Fan", "Bert Huang" ]
As a new way to train generative models, generative adversarial networks (GANs) have achieved considerable success in image generation, and this framework has also recently been applied to data with graph structures. We identify the drawbacks of existing deep frameworks for generating graphs, and we propose labeled-graph generative adversarial networks (LGGAN) to train deep generative models for graph-structured data with node labels. We test the approach with different discriminative models as well as different GAN frameworks on various types of graph datasets, such as collections of citation networks and protein graphs. Experimental results show that our model can generate diverse labeled graphs that match the structural characteristics of the training data and outperforms all baselines in terms of quality, generality, and scalability.
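An illustrative generator for labeled graphs along the lines this abstract suggests (noise to a node-label matrix plus an adjacency matrix); sizes and layer choices are assumptions. Note the outputs are continuous relaxations: as the second reviewer asks below, some discretization trick (e.g., Gumbel-softmax) is needed to pass discriminator gradients through discrete samples.

```python
import torch
import torch.nn as nn

class LabeledGraphGenerator(nn.Module):
    """Noise -> (node-label matrix L, symmetric adjacency matrix A), both relaxed."""
    def __init__(self, z_dim=32, n_nodes=20, n_labels=4):
        super().__init__()
        self.n, self.k = n_nodes, n_labels
        self.mlp = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, n_nodes * n_labels + n_nodes * n_nodes))

    def forward(self, z):
        out = self.mlp(z)
        L = out[:, :self.n * self.k].view(-1, self.n, self.k).softmax(dim=-1)
        A = torch.sigmoid(out[:, self.n * self.k:].view(-1, self.n, self.n))
        A = 0.5 * (A + A.transpose(1, 2))  # symmetrize for undirected graphs
        return L, A                        # discretize (e.g., Gumbel-softmax) to sample
```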
[ "deep generative models", "generative adversarial networks", "data", "graphs", "labeled graphs", "new way", "generative models", "gans", "considerable success" ]
Accept
https://openreview.net/pdf?id=ryg3XILFdN
https://openreview.net/forum?id=ryg3XILFdN
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "rJx9i7dP9E", "BkldhOiScN", "rJxIe2_Z5E" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690401584, 1555572912343, 1555299310100 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper4/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper4/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"GAN-based model for labeled graph generation\", \"review\": \"This paper proposes a GAN-based model, i.e., LGGAN to generate labeled graphs. The generator of LGGAN is a multi-layer perceptron, which samples from a standard normal distribution to generate the label matrix L and the adjacent matrix A. On the other hand, the discriminator takes a graph sample as input and outputs a scalar.\\n\\nEmpirical studies are conducted for generating citation and protein graphs. Under the maximum mean discrepancy (MMD) metric, LGGAN outperforms existing methods like MMSB and DeepGMG on both citation and protein graph generation. LGGAN also demonstrates advantages over MMSB in terms of the graph statistics, which is used to measure dissimilarity between the generated graph and the training graph.\\n\\nOverall, the paper is well-written, the methodology makes sense to me, and the experimental results look good too. So I will vote for acceptance.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Review\", \"review\": \"This paper introduces a new GAN model (LGGAN) for labeled graph generation. The output of LGGAN generator contains two parts: labels of each node (represented by one-hot vectors) and the adjacency matrix. The discriminator is a graph convolutional network that outputs graph-level scalar probability of the input being real data. Specifically, the author uses JK-Net to parameterize the discriminator. Empirical results show that LGGAN outperforms state-of-the-art baselines such as DeepGMG and GraphRNN in terms of the MMD evaluation metrics.\\n\\nThe proposed LGGAN model is very similar to MolGAN. In both model, the generator outputs the adjacency matrix and node label, and the discriminator is parameterized as GCN. The major difference is in the architectural choices of discriminator (JK-Net v.s. Relational GCN). Unfortunately the paper does not compare with MolGAN. It should be very easy to adapt MolGAN model for the datasets used in this paper. The reviewer is also concerned why LGGAN is not compared on molecule tasks. The \\\"specialized evaluation method\\\" has been established by previous work (e.g., MolGAN) and it's not hard to run at all.\\n\\nMoreover, I have several important questions that needs to be clarified:\\n1) Since node labels and adjacency matrix are discrete values, how did the LGGAN propagates the gradient from the discriminator to the generator?\\n2) The author mentioned in Section 3.1 that the node attributes are not included in the discriminator. If so, the discriminator only focus on the adjacency matrix and the node labels would be mostly random. How would LGGAN learns the distribution of labeled graphs like Protein?\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
SkgsQ8LK_E
Disentangling Content and Style via Unsupervised Geometry Distillation
[ "Wayne Wu", "Kaidi Cao", "Cheng Li", "Chen Qian", "Chen Change Loy" ]
It is challenging to disentangle an object into two orthogonal spaces of content and style since each can influence the visual observation in a different and unpredictable way. It is rare for one to have access to a large amount of data to help separate the influences. In this paper, we present a novel framework to learn this disentangled representation in a completely unsupervised manner. We address this problem in a two-branch Autoencoder framework. For the structural content branch, we project the latent factor into a soft structured point tensor and constrain it with losses derived from prior knowledge. This encourages the branch to distill geometry information. Another branch learns the complementary style information. The two branches form an effective framework that can disentangle an object's content-style representation without any human annotation. We evaluate our approach on four image datasets, on which we demonstrate superior disentanglement and visual analogy quality on both synthesized and real-world data. We are able to generate photo-realistic images with 256x256 resolution that are clearly disentangled in content and style.
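One standard way to obtain a differentiable "soft structured point tensor" from per-landmark heatmaps is a soft-argmax, sketched below; this is a common technique and an assumption about the mechanism, not the paper's exact formulation.

```python
import torch

def soft_argmax_2d(heatmaps):
    # heatmaps: (B, K, H, W) -> (B, K, 2) expected landmark coordinates in [0, 1].
    B, K, H, W = heatmaps.shape
    p = heatmaps.view(B, K, -1).softmax(dim=-1).view(B, K, H, W)
    ys = torch.linspace(0, 1, H, device=p.device)
    xs = torch.linspace(0, 1, W, device=p.device)
    y = (p.sum(dim=-1) * ys).sum(dim=-1)  # marginalize over x, expectation over y
    x = (p.sum(dim=-2) * xs).sum(dim=-1)  # marginalize over y, expectation over x
    return torch.stack([x, y], dim=-1)
```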
[ "generative models", "unsupervised learning" ]
Accept
https://openreview.net/pdf?id=SkgsQ8LK_E
https://openreview.net/forum?id=SkgsQ8LK_E
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "ryeIUBYD94", "SyghDeIM94", "r1xb8L_b5E" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555694925851, 1555353700183, 1555297864536 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper3/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper3/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"interesting model, sloppy presentation with many information gaps\", \"review\": \"This paper proposes an unsupervised method for disentangling the content from\\nthe style of images. I proposes a 2-branch VAE. The structure branch is \\nexplicitly designed to capture geometric properties, and the style branch should \\ncapture 'everything else' as style. Extensive evaluation shows that the model \\noutperforms competitors qualitatively and quantitatively on real and artificial \\ndata sets.\\n\\nThe paper proposes an interesting model and presents an impressive diversity\\nof evaluation. Unfortunately, the writing is often unclear (and the paper has a\\nlarge number of grammar errors) which makes the model description, and especially\\nthe evaluation, very hard to follow.\\n\\nPros\\n------\\n- interesting and intuitive prior which drives content capturing\\n- exhaustive evaluation; good effort to capture results quantitatively\\n- ablation study showing benefit of different model components\\n\\nCons\\n------\\n- quality of writing; many grammar mistakes (specifically 2.3 and 2.4); \\nincomplete sentences (e.g., beginning of 2.3; last paragraph on page 2)\\n- many details left out (e.g., in model description and evaluation), model description\\n is hard to follow at places\\n- evaluation insufficiently described; many tasks are not comprehensible form \\nthe provided information.\\n\\nDetailed Comments\\n------------------\\n- a motivation of why the task of content and style disentanglement is useful \\n(or interesting) would improve the paper. Relatedly, Figure 1 is very much left \\nout of context until the experiment section; it could be used to provide the \\nmotivation, and should be explained in more detail in the intro, or moved to a \\nlater spot.\\n- The prior loss description is very dense. The hour glass network and the \\nlandmark mapping process should be explained in detail. It does not become clear \\nif this is standard methodology or a contribution of this work. \\n- an Explanation of hourglass networks should be added. \\n- the description of the KL Loss penalty formulation (second part of 2.3) is \\nvery dense and hard to follow\\n- many details for the evaluation are missing, e.g., regarding SSIM and IS \\nscores\\n- The figures and tables should be rearranged so that they appear close to where \\nthey are discussed in text. A single table should not include results from \\ndifferent experiments (as done in Table 2)\\n- Retrieval experiments should include comparison against a baseline model and \\ncompetitive comparison models to put the proposed models' scores into context\\n- The Comparison evaluation (in 4.2) is insufficiently described. What is the \\nazimuth factor? From Figure 5 the difference in performance between the systems \\nis not obvious. What do the models predict? What changes along the rows?\\n- Last paragraph on p. 
6 refers to Figure 7, you probably mean Figure 1.\\n- good intuitive explanation of geometry prior in last paragraph page 6 (\\\"The \\nlearnt structural heatmaps...\\\") This explanation would be useful to have earlier \\nin the paper.\\n- Will the 18-point landmark annotated data be made available?\\n- last paragraph of page 7 is very hard to follow\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Two-Branch autoencoder model for simultaneous content and style unsupervised learning\", \"review\": \"In this paper, a model based on autoencoder framework is proposed to disentangle object\\u2019s representation by content and style in an unsupervised manner. The model contains two branches, one for content and the other for style. The structural content branch looks at the structural points to capture the object geometry, while the style branch learns the style representation. The objective function contains the prior loss, reconstruction loss and the KL loss, which is an extension of the traditional VAE framework.\\n\\nExperiments show that the proposed model can to some extent produce representations capturing both content and style. In particular, a content representation of the query image and the style representation of the reference one can output an image maintaining the geometric information of the query while having the style of the reference. In terms of quantitative results, the proposed method also outperforms existing methods wrt SSIM and IS score.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
S1xim8UFuV
Point Cloud GAN
[ "Chun-Liang Li", "Manzil Zaheer", "Yang Zhang", "Barnabás Póczos", "Ruslan Salakhutdinov" ]
Generative Adversarial Networks (GAN) can achieve promising performance on learning complex data distributions on different types of data. In this paper, we first show that a straightforward extension of an existing GAN algorithm is not applicable to point clouds, because the constraint required for discriminators is undefined for set data. We propose a twofold modification to a GAN algorithm to be able to generate point clouds (PC-GAN). First, we combine ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process. A key component of our method is that we train a posterior inference network for the hidden variables. Second, PC-GAN defines a generic framework that can incorporate many existing GAN algorithms. We further propose a sandwiching objective, which results in a tighter Wasserstein distance estimate than the commonly used dual form in WGAN. We validate our claims on the ModelNet40 benchmark dataset and observe that PC-GAN trained with the sandwiching objective achieves better results on test data than existing methods. We also conduct studies on several tasks, including generalization on unseen point clouds, latent space interpolation, classification, and image to point clouds transformation, to demonstrate the versatility of the proposed PC-GAN algorithm.
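A rough sketch of the sandwiching idea on a minibatch: average a dual (critic-based) lower bound on the Wasserstein distance with an upper bound given by an explicit coupling of the two batches. Here plain vectors stand in for whole point clouds and the critic is assumed Lipschitz-constrained; the paper's estimators and weighting differ in detail.

```python
import torch
from scipy.optimize import linear_sum_assignment

def sandwiched_w1(real, fake, critic):
    # Lower bound: the WGAN dual with a Lipschitz-constrained critic.
    lower = critic(real).mean() - critic(fake).mean()
    # Upper bound: the cost of any explicit pairing of the two equal-size
    # batches upper-bounds W1; a min-cost assignment is the tightest pairing.
    cost = torch.cdist(real, fake)
    r, c = linear_sum_assignment(cost.detach().cpu().numpy())
    upper = cost[torch.as_tensor(r), torch.as_tensor(c)].mean()
    return 0.5 * (lower + upper)  # the "sandwiched" estimate
```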
[ "Point Cloud", "GAN" ]
Accept
https://openreview.net/pdf?id=S1xim8UFuV
https://openreview.net/forum?id=S1xim8UFuV
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "SJe2QN_w9N", "rkl9XBCfcN", "HkeX63dzqE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690531917, 1555387682495, 1555365050643 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper2/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper2/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"Accepted\"}", "{\"title\": \"Interesting generative model for set data\", \"review\": \"This paper proposes a new generative model for unordered data, with a particular application to point clouds. This model includes an inference method and and a novel objective function based on sandwiching the Wasserstein distance between an upper and lower bound. The paper includes a clear description of the problem and motivation and strong experimental results, both quantitative and qualitative.\", \"pros\": [\"Novel, reasonable model\", \"Clear writing\", \"Strong evaluation and results\", \"Interesting new objective based on Wasserstein bounds\"], \"cons\": [\"For the practical problem of 3D shape modeling, could compare to other representations e.g. Surface Networks: http://openaccess.thecvf.com/content_cvpr_2018/papers/Kostrikov_Surface_Networks_CVPR_2018_paper.pdf\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"review\", \"review\": \"This paper proposes using GAN to generate 3D point cloud. In addition, it introduces a new \\\"sandwiching\\\" objective which is basically averaging the upper and lower bound of Wasserstein distance between distributions.\\n\\nAlthough the problem this paper addresses is very important (generating 3D shapes), it has the following flaws:\\n1. training an inference network to model the latent variable has been done many times in the literature.\\n2. I am not an expert in this domain but the datasets in this paper seem to be a bit toy-ish.\\n3. Some more experiments need to be done on verifying the new objective.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rJl97IIt_E
Deep Random Splines for Point Process Intensity Estimation
[ "Gabriel Loaiza-Ganem", "John P. Cunningham" ]
Gaussian processes are the leading class of distributions on random functions, but they suffer from well known issues including difficulty scaling and inflexibility with respect to certain shape constraints (such as nonnegativity). Here we propose Deep Random Splines, a flexible class of random functions obtained by transforming Gaussian noise through a deep neural network whose outputs are the parameters of a spline. Unlike Gaussian processes, Deep Random Splines allow us to readily enforce shape constraints while inheriting the richness and tractability of deep generative models. We also present an observational model for point process data which uses Deep Random Splines to model the intensity function of each point process and apply it to neuroscience data to obtain a low-dimensional representation of spiking activity. Inference is performed via a variational autoencoder that uses a novel recurrent encoder architecture that can handle multiple point processes as input.
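A simplified stand-in for the construction: a network maps Gaussian noise to non-negative values at fixed knots, and the random function is their piecewise-linear interpolation, which is non-negative by construction (the paper uses higher-order splines with additional smoothness constraints). All names and sizes here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonNegPiecewiseIntensity(nn.Module):
    """Gaussian noise -> non-negative knot values -> piecewise-linear intensity."""
    def __init__(self, z_dim=8, n_knots=12):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.Tanh(),
                                 nn.Linear(64, n_knots))

    def forward(self, z, t):
        # z: (z_dim,) one noise draw; t: (T,) query times in [0, 1].
        vals = F.softplus(self.net(z))                 # knot values, all >= 0
        knots = torch.linspace(0.0, 1.0, vals.numel())
        idx = torch.searchsorted(knots, t.clamp(0.0, 1.0)).clamp(1, vals.numel() - 1)
        t0, t1 = knots[idx - 1], knots[idx]
        w = (t - t0) / (t1 - t0)
        return (1 - w) * vals[idx - 1] + w * vals[idx]  # lambda(t) >= 0 everywhere
```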
[ "splines", "VAE", "random functions", "point processes", "neuroscience" ]
Accept
https://openreview.net/pdf?id=rJl97IIt_E
https://openreview.net/forum?id=rJl97IIt_E
ICLR.cc/2019/Workshop/DeepGenStruct
2019
{ "note_id": [ "BJgHEVuv94", "r1e4PGx49V", "H1e2XwOit4" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555690540957, 1555460699902, 1554904868135 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/DeepGenStruct/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper1/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/DeepGenStruct/Paper1/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"A combination of a neural network with input random noise to model one-dimensional random functions using splines\", \"review\": \"The paper presents a method to model univariate functions in the interval [T1, T2] that makes use of splines having random parameters modelled by a neural network. The overall method is suitable only for modelling univariate functions (e.g. when the input is time) and the authors apply this model to Poisson point processes. A nice feature of the approach is that the spline-based formulation allows to enforce non-negativity constraints to the random functions. Thus, such random functions can be used to model the intensity of Poisson processes without needing to impose a non-negativity transformation.\\n\\nI think that the claim in the introduction that the proposed Deep Random Splines is an alternative to Gaussian processes \\nis an overstatement. This is because the current method is only suitable for univariate functions and even trying to extend it to two-dimensional input spaces is challenging. I would suggest the authors to extend their method to two and three-dimensional domains so that their algorithm can have a bit broader applicability in some spatio-temporal problems.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Deep Random Splines, a class of random functions that can incorporate constraints such as non-negativity, monotonicity, or convexity.\", \"review\": \"This paper presents the DRS, a class of random functions. In contrast to GPs, DRS uses splines to define the function; in particular it defines a piecewise function using a spline on each interval, making sure that the overall function is continuous (and so are its derivatives). The parameters of each spline are modeled using a deep neural network that takes Gaussian noise as input. Importantly, the parameters of each spline can be chosen so that the modeled function satisfies certain desired shape properties, such as non-negativity, monotonicity, or convexity. As a use case, the paper shows an example of application that uses non-negative splines to model the rate of a Poisson process, analogous to the log-Gaussian Cox process. Inference is carried out with amortized variational inference.\\n\\nI found this is a strong paper and therefore recommend acceptance. It is well explained and the method is of interest to the community.\\n\\nOne question that I had is how this model performs/scales with the dimensionality of the output space (i.e., the analogous to the multi-output GP).\\n\\nAnother interesting point would be to compare against the log-Gaussian Cox process in the experiments.\\n\\nAs a minor comment, from page 3 I didn't understand why the Q matrices are of size 2x2 when d=3. 
On page 2, it says that each matrix Q is of size k-by-k when d=2k+1, so for d=3 shouldn't we obtain k=1?\", \"rating\": \"5: Top 15% of accepted papers, strong accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
S1xnKi5BOV
A Simple yet Effective Baseline for Robust Deep Learning with Noisy Labels
[ "Anonymous" ]
Recently deep neural networks have shown their capacity to memorize training data, even with noisy labels, which hurts generalization performance. To mitigate this issue, we propose a simple but effective method that is robust to noisy labels, even with severe noise. Our objective involves a variance regularization term that implicitly penalizes the Jacobian norm of the neural network on the whole training set (including the noisy-labeled data), which encourages generalization and prevents overfitting to the corrupted labels. Experiments on noisy benchmarks demonstrate that our approach achieves state-of-the-art performance with a high tolerance to severe noise.
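One plausible instantiation of the described objective: a cross-entropy term on the (possibly noisy) labels plus a penalty on the variance of predictions across random perturbations of the same input, which approximates a Jacobian-norm penalty as the perturbations shrink. The augment callable and the weighting are assumptions, not the paper's exact recipe.

```python
import torch.nn.functional as F

def noisy_label_objective(model, x, y_noisy, augment, lam=1.0):
    # Cross-entropy on the (possibly corrupted) labels plus a variance penalty:
    # predictions on two random perturbations of the same input should agree.
    # As the perturbation scale shrinks, this approximates a squared-Jacobian-
    # norm penalty on the network at x.
    p1, p2 = model(augment(x)), model(augment(x))
    ce = F.cross_entropy(p1, y_noisy)
    var = (p1.softmax(dim=-1) - p2.softmax(dim=-1)).pow(2).sum(dim=-1).mean()
    return ce + lam * var
```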
[ "Learning with noisy labels", "generalization of deep neural networks", "robust deep learning" ]
Reject
https://openreview.net/pdf?id=S1xnKi5BOV
https://openreview.net/forum?id=S1xnKi5BOV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "Byx_W62zcV", "S1grdN49KV", "r1lf1kprY4" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555381504211, 1554822253021, 1554530009799 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper68/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper68/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\", \"comment\": \"Reviewers found issue with the novelty, further clarification of novelty is needed\"}", "{\"title\": \"Impressive performance but contributions unclear\", \"review\": \"The authors propose a method for learning deep neural networks under label noise by regularizing the Jacobian of the network as a first-order approximation of the variance between perturbed examples. They argue that this will mitigate overfitting to mislabeled examples and produce smoother decision boundaries. They evaluate their methods by comparing to other noise-robust techniques on CIFAR-10 and CIFAR-100.\\n\\nThe authors present a complete piece of work, from a summary of noise issues in deep learning, to a description of their variance regularization technique, to their experiments. The paper follows a logical structure, and is generally well written. It would benefit from a clearer discussion of related works in the main body.\\n\\nThe regularization term they use is sensible, and produces strong results on the datasets in comparison to the selected methods. The results are continued in the Appendix in the data-dependent noise case. Had the authors focused on empirical variance regularization results for image classification, this would have been a much stronger paper.\\n\\nHowever, the authors don't deliver on their two main stated contributions. The first contribution - analysis of solution smoothness and subspace dimensionality for noisily trained models - is not presented. They reference past works (e.g. Sokolic et al., 2016) but do not provide an original analysis or evaluation as implied (\\\"we show empirically that the objective can learn a model with low subspace dimensionality and low hypothesis complexity\\\"); their experiments solely measure classification performance. The authors' second contribution is to \\\"propose a novel approach for training with noisy labels. \\\". However, variance regularization via data augmentation has been previously proposed (\\\"Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning\\\", Sajjadi et al., 2016), as has Jacobian regularization (\\\"Robust Large Margin Deep Neural Networks\\\", Sokolic et al., 2016). Jacobian regularization and related methods have also been previously analyzed in the noisy label setting (\\\"Gradient Regularization Improves Accuracy of Discriminative Models\\\", Varga et al., 2017). If the authors' proposed method is significantly different from these approaches, it is not clear from the paper.\\n\\nAgain, had this paper focused on empirical evaluation of Jacobian regularization for deep image recognition as compared to other methods, it would have been stronger. 
In its current form, the contributions are unclear.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting method with excitingly good empirical results; Paper presentation can be improved further\", \"review\": \"This paper presents a method to train deep NN with noisy labels by adding a variance regularization term. The authors show/derive that such a variance regularization term is an unbiased estimator of the Jacobian norm of the neural network. As previous literature (Sokolic et al; Novak et al. 2018) show that this Jacobian norm is highly related to the generalization performance of NN, the authors conclude that minimizing their proposed variance regularization term can also help model generalization. Finally, the authors conduct experiments on CIFAR-10 and CIFAR-100 datasets and show their proposed learning objective can be used to train a NN robust to noise. The experimental results are strikingly good.\\n\\nOverall the paper is interesting, the major question is why EXACTLY minimizing the norm of the Jacobian of the NN can help improve the model robustness. I am not very familiar with the work (Sokolic et al; Novak et al. 2018) and thus I would appreciate a (high-level) description of those previous studies' major ideas. Furthermore, I am wondering why the authors treat good \\\"generalization\\\" of NN as the same as good \\\"robustness\\\" of NN. The authors describe label noise in Section 2.1 but I don't see how such label noise is modeled later in the regularization term design. Finally, I am a little bit confused about the derivation of the equation between equation (3) and (4), from lim_{\\\\tau -> 0} ... to \\\\frac{1}{N} \\\\sum_{i=1}^{N} Tr(...). More explanation on this part is appreciated. \\n\\nThe structure of the paper is good and writing is overall clear. There are still some places that can be improved. First, at the beginning of section 2, the authors write a K-class classifier f from ..., it's better to explicitly state the function f is from R^{d} to R^{K}, instead of being a scalar-valued function. This can make later discussion on the Jacobian matrix more clear. Second, the connection of section 2.1 with later parts is not clear. Finally, there are some typos listed below:\\n1. in the third line of section 2.1, there are two contiguous \\\",\\\"s .\\n2. in the last line of page 2, there are two contiguous \\\",\\\"s .\\n3. in the second line of page 3, there are two contiguous \\\",\\\"s .\\n4. in the second line of section 4.1, what are the \\\"Sec. 5.5 and Sec. 5.6\\\"?\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
Sklsts5H_E
Deep Generative Inpainting with Comparative Sample Augmentation
[ "Boli Fang", "Miao Jiang", "Jerry Shen", "Bjord Stenger" ]
Recent advancements in deep learning techniques such as Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN) have achieved breakthroughs in the problem of semantic image inpainting, the task of reconstructing missing pixels in given images. While much more effective than conventional approaches, deep learning models require large datasets and great computational resources for training, and inpainting quality varies considerably when training data vary in size and diversity. To address these problems, we present in this paper an inpainting strategy of Comparative Sample Augmentation, which enhances the quality of the training set by filtering out irrelevant images and constructing additional images using information about the surrounding regions of the images to be inpainted. Experiments on multiple datasets demonstrate that our method extends the applicability of deep inpainting models to training sets with varying sizes, while maintaining inpainting quality as measured by qualitative and quantitative metrics for a large class of deep models, with little need for model-specific consideration.
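The two steps named in the abstract, filtering by similarity to the image being inpainted and enriching the kept images, might look like the following numpy sketch; the histogram-based similarity and the Gaussian perturbation are assumptions consistent with the reviews below, not the authors' code.

```python
import numpy as np

def histogram_filter(candidates, reference, bins=32, keep=100):
    # Keep the training images whose intensity histograms are closest (L1)
    # to the histogram of the image that will be inpainted.
    ref_h, _ = np.histogram(reference, bins=bins, range=(0, 255), density=True)
    dists = [np.abs(np.histogram(img, bins=bins, range=(0, 255), density=True)[0]
                    - ref_h).sum() for img in candidates]
    return [candidates[i] for i in np.argsort(dists)[:keep]]

def self_enrich(img, sigma=2.0):
    # Enlarge the filtered set with copies perturbed by per-pixel Gaussian noise.
    return np.clip(img + np.random.randn(*img.shape) * sigma, 0, 255)
```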
[ "Image Inpainting", "Various Datasets" ]
Reject
https://openreview.net/pdf?id=Sklsts5H_E
https://openreview.net/forum?id=Sklsts5H_E
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "Hygihr1FKN", "HJlBSMjwYN", "HJxnwMG8FE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554736563455, 1554653757005, 1554551396242 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper66/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper66/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Requires better justification, analysis, and experiments\", \"review\": \"Summary:\", \"help_image_inpainting_using_gans_by_two_strategies\": \"1) Comparative Augmenting filter: choose images from a training dataset whose histogram is similar to the image in consideration. Histogram matching is an old trick in the computer vision community.\\n\\n2) Self-Enrichment: add random noise to each pixel. This seems to be the same as \\\"Instance Noise\\\" [1], which the authors did not cite but claim as their own.\\n\\nThe authors motivate their strategy by saying older methods don't work well in case of non-repetitive backgrounds such as faces, but themselves rely on a global similarity like histogram matching. Highly doubt if this can theoretically work. Results in the paper show that practically the improvement is negligible.\", \"authors_mention_3_contributions_but_do_not_justify_their_claims\": \"1) Histogram matching - authors don't mention that it is an old trick, and don't justify why this could work. Also, practical results show that it doesn't.\\n2) Instance noise - authors don't cite an older paper that proposed the same. Also, practical results show that it doesn't.\\n3) Authors mention \\\"detailed set of experiments\\\" in the introduction but only include 1 in the experiment section, and say they could not add more due to time constraints.\", \"literary_errors\": \"There are quite a few word-level errors such as word redundancy, sentence errors, spelling mistakes that make the paper difficult to read.\\n\\n[1] \\\"Instance Noise: A trick for stabilising GAN training\\\" https://arxiv.org/abs/1610.04490\", \"rating\": \"1: Strong rejection\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Not clear contributions; not even clear method\", \"review\": \"The authors propose an augmentation method for image inpainting; the core idea lies in filtering out irrelevant images and augmenting the images by adding random normal noise in the pixel space.\\n\\nEven though potentially, the method has its merits, the method is not clearly written, while the paper is rather confusing (please see below).\", \"major_points\": \"1) There is no formal definition of the task in the paper (i.e. they interchangeably just mention inpainting or semantic inpainting without any definition). In conjunction with the lack of clear explanation of their method, understanding the contributions is challenging. \\n\\n2) It is not clear what type of GAN the authors use. They mention WGAN-GP (Gulrajani et al), but it is not mentioned whether they generate from scratch the images or whether this is a conditional GAN (as traditional inpainting methods). 
Therefore, it is not clear how the GAN is used in this work.\\n\\n3) The comparative augmentation filter seems like a KNN in the pixel-space; given question 2, it is not clear where this filter fits in the training method.\\n\\n4) The authors mention that their method 'extends the applicability of deep inpainting methods', but in the end only experiment in a 'restricted-CIFAR'; the remaining experiments on Celeb-A and Places are not included. \\n\\n5) The proposed augmentation method, i.e. adding random normal noise per pixel, is not novel; in addition, there is no ablation experiment demonstrating the benefits experimentally.\\n\\n6) Why do the authors propose to filter the images in the pixel space and not a perceptual space, as has been popular in recent years?\", \"minor_points\": \"1) The following expressions should be more rigorous: \\n - 'becomes 0 if and only if P = Q almost surely'.\\n - 'produces better or on-par images no later than the original GAN'.\\n - 'and ones that utilize [...] latent space.'\\n\\n2) The authors mention that Yu et al 'fail at more complex inpainting images such as faces and natural scenery'. Do they have some visual examples that contradict the original paper? Because in the original paper, the methods perform well in both domains. \\n\\n3) In Fig. 1 and 2, what is the '/workshop_format/[...].png'? \\n\\n4) In table 1, l1 and l2 errors are mentioned, but there is a single number; which also includes an undefined percentage. Could the authors clarify what they mean? \\n\\n5) The authors do not mention how much they augment their original images, i.e. for every original image how many images are used during training? Is the noise sampled per image or per batch? \\n\\nGiven the major improvement points, the paper should be re-written before being accepted to the workshop.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rJgqFi5rOV
SHREWD: Semantic Hierarchy Based Relational Embeddings For Weakly-Supervised Deep Hashing
[ "Heikki Arponen", "Tom E Bishop" ]
Using class labels to represent class similarity is a typical approach to training deep hashing systems for retrieval; samples from the same or different classes take binary 1 or 0 similarity values. This similarity does not model the full rich knowledge of semantic relations that may be present between data points. In this work we build upon the idea of using semantic hierarchies to form distance metrics between all available sample labels; for example cat to dog has a smaller distance than cat to guitar. We combine this type of semantic distance into a loss function to promote similar distances between the deep neural network embeddings. We also introduce an empirical Kullback-Leibler divergence loss term to promote binarization and uniformity of the embeddings. We test the resulting SHREWD method and demonstrate improvements in hierarchical retrieval scores using compact, binary hash codes instead of real-valued ones, and show that in a weakly supervised hashing setting we are able to learn competitively without explicitly relying on class labels, but instead on similarities between labels.
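A sketch of the two non-standard loss terms the abstract mentions: a similarity term that pushes embedding distances toward hierarchy-derived label distances, and a binarization term (here a simple quantization surrogate standing in for the paper's empirical KL divergence). Shapes and names are illustrative.

```python
import torch

def similarity_loss(embeddings, sem_dist):
    # embeddings: (B, d) continuous codes; sem_dist: (B, B) pairwise label
    # distances derived from the semantic hierarchy (e.g., cat is closer to
    # dog than to guitar).
    emb_dist = torch.cdist(embeddings, embeddings, p=1)
    return (emb_dist - sem_dist).pow(2).mean()

def binarization_penalty(embeddings):
    # Stand-in for the paper's empirical-KL term: push code entries toward
    # {-1, +1} so that thresholding to binary hashes loses little information.
    return (embeddings.abs() - 1.0).pow(2).mean()
```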
[ "Deep Hashing", "Content Base Image Retrieval", "Semantic image relations", "Weakly Supervised learning", "Representation Learning" ]
Accept
https://openreview.net/pdf?id=rJgqFi5rOV
https://openreview.net/forum?id=rJgqFi5rOV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "H1lE4a3fcN", "HkekzXkbqV", "H1lHi3l_FE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555381547824, 1555260166792, 1554676893039 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper65/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper65/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Nice workshop contribution\", \"review\": \"The paper proposes to use semantic hierarchies to estimate distance metrics between sample labels instead of the standard 0/1 similarity values. The semantic distance is used as an additional supervision signal to regularize deep nets.\\n\\nThe topic is relevant and the paper could of interest to the workshop attendees. The approach is intuitive, described well, and the experimental results seem to be comprehensive (for the CIFAR-100 and ImageNet datasets they were carried on). It's also nice to see authors including an ablation study for their method.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting and excellent work\", \"review\": \"Summary: This paper introduces three loss terms (L_cls, L_KL, L_sim), and shows performances each loss term. This work achieves reasonable performances without explicit class labels. I accept this paper.\", \"notes\": [\"The paper introduces the semantic similarity loss which is to learn similarity in the embedding space given similarity between targets. In the ablation study, the loss function plays an important role in improving results.\", \"Also, they introduce the KL divergence term between the embedding distribution and binary target distribution. The paper also shows that the performances without the KL loss. With the KL loss, the proposed model shows substantial improvement.\", \"The paper did experiments on different hash code lengths and showed better performances on the longer hash code\", \"I strongly accept this paper. This paper introduces novel loss functions and demonstrates that their method shows improvement in fully supervised and weakly supervised settings.\"], \"rating\": \"5: Top 15% of accepted papers, strong accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
B1ecYsqSuN
INCORPORATING BILINGUAL DICTIONARIES FOR LOW RESOURCE SEMI-SUPERVISED NEURAL MACHINE TRANSLATION
[ "Mihir Kale", "Sreyashi Nag", "Varun Lakshinarasimhan", "Swapnil Singhavi" ]
We explore ways of incorporating bilingual dictionaries to enable semi-supervised neural machine translation. Conventional back-translation methods have shown success in leveraging target-side monolingual data. However, since the quality of back-translation models is tied to the size of the available parallel corpora, this could adversely impact the synthetically generated sentences in a low-resource setting. We propose a simple data augmentation technique to address this shortcoming. We incorporate widely available bilingual dictionaries that yield word-by-word translations to generate synthetic sentences. This automatically expands the vocabulary of the model while maintaining high-quality content. Our method shows an appreciable improvement in performance over strong baselines.
[ "bilingual dictionaries", "neural machine translation", "low resource", "ways", "conventional", "methods", "success", "quality", "models" ]
Accept
https://openreview.net/pdf?id=B1ecYsqSuN
https://openreview.net/forum?id=B1ecYsqSuN
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "SJlQvanz5V", "S1geNV6BKN", "BkghQVbBYE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555381595018, 1554531368271, 1554482212103 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper64/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper64/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Good paper. Approach is simple and intuitive, yet effective. The paper lacks a comparison to/discussion with a closely related work. Requires a few formatting fixes\", \"review\": \"Paper summary:\\nThis paper targets machine translation of low-resource languages, where the main problem of current approaches is dealing with Out-Of-Vocabulary words. Given a small parallel corpus of the source and target language, the authors propose a data augmentation technique using dictionaries of the source-target languages. Specifically, given a relatively small parallel corpus of source and target sentences, and given a new set of sentences in the target language (English, in this work) and a dictionary from the target to the source language, the set of sentences are translated word by word using the dictionary the source language (Germen, and Spanish in this work), then the original sentence and the resulting sentences are added to target and source datasets in the parallel corpus, respectively.\", \"the_authors_compare_their_proposed_methods_to_two_existing_techniques\": \"Back translation, COPY which copies OOV words into the sentence without translation. In their various experiments, the results convey the effectiveness of the proposed approach where it achieves an increase in the BLEU score.\\n\\nPros.\\n1-\\tThe paper suits the workshop domain.\\n2-\\tThe proposed approach is simple, yet it performs on par with the other baseline methods. \\n3-\\tThe proposed model is evaluated in different scenarios, and the experimental details are provided in the paper. \\n4-\\tGenerally, the paper is well-written and easy to follow. \\n5-\\tThe authors discussed how their proposed method has higher coverage on both the target and source languages, in contrast to the COPY method which targets only the target language. I think this is an important contribution and could replace the third contribution.\\n6-\\tThe authors discussed one of the potential side effects that word-by-word translation can cause, which is the syntax/grammar correctness of the resulting sentence. \\nCons.\\n1-\\tI believe this paper should include a comparison with Sennrich et al 2015 below or at least a discussion on why it was excluded. This work was proposed to address the same problem that the authors target. In this work, sub-words are used as the tokens for translation in order to address the OOV problem. Specifically, an external dataset is used to get a list of most common subwords instead of full words. \\nSennrich, R., Haddow, B., & Birch, A. (2015). Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.\\n\\n2-\\tUsing a dictionary for word by word translation is very similar to the idea of using synonyms to mask a writing style. Both these ideas result in not only syntactic issues but semantic ones as well. For example, \\u2018Kicked the bucket\\u2019 which means \\u2018passed away\\u2019 will lose its meaning if translated word by word. 
The syntactic part was covered properly in Section 4, while the semantic part was partially covered in Section 5.4 where the authors discuss domain adaptation. In my opinion (and of course as the results show), using COPY with the authors\u2019 method should be elaborated on more, both in the discussion and the experiments. \\n\\nAdditional minor (formatting) issues:\\n-Table 1 is not clear. First, since the study is performed on two languages, the caption should specify which language this example is in. Second, since different approaches target a different part of the corpus (either the source or the target language) I suggest separating them either in two smaller tables or by having an empty row. What I see in this table is an alternation between languages and I find it a bit confusing. \\n-Tables 2 and 3. Please add \u2018using BLEU score\u2019 to the captions. That would be faster to spot compared to looking for it in the text.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Simple technique for improving low-resource translation when bilingual dictionaries are given.\", \"review\": \"This paper investigates the idea of using bilingual dictionaries to create synthetic sources for target-side monolingual data in order to improve over NMT models trained with small amounts of parallel data.\nThis strategy is compared with back-translation and copying the target to the source side and evaluated on TED data for de-en and es-en in a simulated low-resource and a domain adaptation setting. The empirical results show that when little parallel data is available in addition to bilingual dictionaries, this method can outperform back-translation and copying.\", \"pros\": [\"Written clearly\", \"Reproducible (hyperparameters, data)\", \"Evaluation shows improvements of the proposed model over baselines, despite the simplicity of the data and the noise in the sources.\", \"The effect of data sizes is studied.\", \"Good review of related work.\"], \"cons\": [\"The low-resource setting is only simulated. It would have been better to take a truly low-resource language and evaluate the methods on that (e.g. the other language pairs presented in Qi et al. 2018).\", \"The requirement of bilingual dictionaries and their coverage and their domain dependence is not discussed. If little parallel data is available, can we simply assume the existence of large dictionaries?\", \"It is assumed that the word-by-word dictionary translation \"at least ensures that the words in the synthetic sentences are accurate\" (\u00a74). This is critical since it ignores the problem of polysemy - one word in the target language can often have more than one meaning in the source language: which one is picked for generating the synthetic sentence?\", \"In summary, despite its clarity and simplicity, I don't find the paper very creative regarding the methodology, and it does not sufficiently answer the question of when dictionaries outperform back-translation, since the properties of the additional resource, i.e. the dictionary, are not discussed/investigated, nor are its limitations. 
It would have been interesting to see the same approach in a truly low-resource problem where the dictionary might be limited as well.\"], \"details\": [\"Consider changing the acronym of the method; it seems widely adopted for World of Warcraft.\", \"Table 1 has encoding problems for \u00e1, \u00f2 etc.\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
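The word-by-word dictionary augmentation that both reviews discuss is simple to sketch. In the illustration below, `lexicon` and the toy entries are hypothetical; the real method also has to handle tokenization and, as the second review stresses, polysemy (here the single stored sense is always taken):

```python
def word_by_word_translate(sentence, lexicon):
    # Unknown words are copied through unchanged, as in the COPY baseline.
    return " ".join(lexicon.get(tok, tok) for tok in sentence.split())

# Toy English-to-German entries and one monolingual target sentence.
lexicon = {"the": "der", "dog": "Hund", "sleeps": "schläft"}
mono_target = ["the dog sleeps"]

# Each monolingual target sentence yields one synthetic source sentence;
# the (synthetic source, real target) pair is appended to the parallel corpus.
synthetic_pairs = [(word_by_word_translate(t, lexicon), t) for t in mono_target]
```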
S1xKYsqB_E
Learning Spatial Common Sense with Geometry-Aware Recurrent Networks
[ "Hsiao-Yu Tung", "Ricson Cheng", "Katerina Fragkiadaki" ]
We integrate two powerful ideas, geometry and deep visual representation learning, into recurrent network architectures for mobile visual scene understanding. The proposed networks learn to “lift” 2D visual features and integrate them over time into latent 3D feature maps of the scene. They are equipped with differentiable geometric operations, such as projection, unprojection, egomotion stabilization, in order to compute a geometrically-consistent mapping between the world scene and their 3D latent feature space. We train the proposed architectures to predict novel image views given short frame sequences as input. Their predictions strongly generalize to scenes with a novel number of objects, appearances and configurations, and greatly outperform predictions of previous works that do not consider egomotion stabilization or a space-aware latent feature space. Our experiments suggest the proposed space-aware latent feature arrangement and egomotion-stabilized convolutions are essential architectural choices for spatial common sense to emerge in artificial embodied visual agents.
[ "spatial common sense", "recurrent networks", "egomotion stabilization", "latent feature space", "powerful ideas", "geometry", "recurrent network architectures", "networks" ]
Accept
https://openreview.net/pdf?id=S1xKYsqB_E
https://openreview.net/forum?id=S1xKYsqB_E
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "S1xKyLyKKE", "HyeSRogOYV", "rJgwL3sDKN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554736608943, 1554676684749, 1554656334990 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper63/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper63/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Article evaluates an embedding of 2d-images that has learnt to predict the change in viewpoint (Rotation+translation) of 3D surfaces. The learning is done on synthetic 3D datasets.\", \"review\": [\"Interesting article on evaluating camera's egoview based embedding of images into a 3D latent space that can reconstruct the 3D viewpoint of surfaces in the scene. Article quite short and dense and difficult to understand all the details.\", \"How does the 3D-Unet evaluate the Ego-motion of the camera viewpoint ? Given the article is short, explanation on the architecture are quite fundamental.\", \"Does the 3D latent space resemble any shape or surface ? Or is the latent space representation abstract ?\", \"How are the intrinsic and extrinsic parameters of the camera evaluated in this setup ? We require these parameters to be estimated when evaluating a homography between two views.\", \"Though the number of labels during training required are small, we still require a rich set of 3D object datsets and different viewpoints generated from them. Can you provide an idea of how this architecture generalizes to new classes and shapes of objects in images.\", \"An idea of memory consumption for such architectures would be quite useful.\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good design, great results\", \"review\": \"Summary:\\nThe authors design an architecture for building a geometry-aware latent representation of a scene, and test this by querying a view from the latent representation and comparing it with the actual view.\\n\\nThe paper is structured well, it is clear on what the authors would like to say. Recently geometry-aware architectures have gained importance, and the task for which the authors have shown results is promising.\\n\\nThe authors designed a geometry-aware deep neural network, and trained it on multiple (but limited) viewpoints of multiple scenes to make a 3D latent representation of each scene. They then query a novel viewpoint of each scene and make a prediction loss on that to train the network in a self-supervised manner.\\n\\nAlthough it is \\\"self-supervised\\\", it is not clear whether the \\\"novel\\\" viewpoints used while training are within the training set, because if they are not, then the training data is actually as big as the number of iterations used to train. It would be helpful if the authors could clarify on this.\\n\\nThe authors show that their architecture is able to give quality results for scenes with more objects than those on which the network was trained, which provides evidence to the generalizability of the network. This is a very good result, provided we are clear on the \\\"limited\\\"-ness of the training data.\\n\\nThe authors have compared their results with those of another recent architecture that tried to tackle the same problem. The results seem to be in favour of this paper, especially in the case of more objects in the scene. 
It is worth noting that the other method was not geometry-aware by design, as this paper is.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
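The first review asks how the camera intrinsics and extrinsics enter the setup. As a point of reference only, here is a standard pinhole projection in NumPy; the paper's differentiable projection operates on latent 3D feature grids rather than point lists, so this is background math, not the paper's operator:

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project Nx3 world points to pixel coordinates given intrinsics K
    (3x3) and extrinsics R (3x3 rotation), t (3-vector translation)."""
    cam = points_world @ R.T + t        # world frame -> camera frame
    uvw = cam @ K.T                     # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide
```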
HyxYFjqHd4
Interactions between Representation Learning and Supervision
[ "Valliappa Chockalingam" ]
Representation learning is one of the fundamental problems of machine learning. On its own, this problem can be cast as an unsupervised dimensionality reduction problem. However, representation learning is often also used as an implicit step in supervised learning (SL) or reinforcement learning (RL) problems. In this paper, we study the possible "interference" that supervision, commonly provided through a loss function in SL or a reward function in RL, might have on learning representations, through the lens of learning from limited data and continual learning. In particular, connectionist networks often face the problem of catastrophic interference, whereby changes in the data distribution cause networks to fail to remember previously learned information; learning representations, on the other hand, can be done without labeled data. A primary running hypothesis is that representations learned using unsupervised learning are more robust to changes in the data distribution than the intermediate representations learned when using supervision, because supervision interferes with otherwise "unconstrained" representation learning objectives. To empirically test these hypotheses, we perform experiments using a standard dataset for continual learning, permuted MNIST. Additionally, through a heuristic quantifying the amount of change in the data distribution, we verify that the results are statistically significant.
[ "representation learning", "problem", "supervision", "representations", "data", "continual learning", "data distribution", "interactions" ]
Reject
https://openreview.net/pdf?id=HyxYFjqHd4
https://openreview.net/forum?id=HyxYFjqHd4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "SJeYERnf54", "S1gmK6tdY4", "BylW7o4dKE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555381808549, 1554713979429, 1554692889367 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper62/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper62/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\", \"comment\": \"reivewers found no proposed method and the current empirical analysis too narrow\"}", "{\"title\": \"Initial empirical evaluation on the robustness of learned representation to distribution shift\", \"review\": \"Summary\\n\\nIn this paper, the authors ask the question on if we lose general representation by implicitly performing representation learning in supervised settings. In order to empirically answer this question, the authors empirically demonstrate the following 2 observations using permuted MNIST dataset:\\n\\n1. Representation achieved from unsupervised learning can support fine tuning classifiers for different data distribution and achieve equally well performance. However, representation implicitly derived from supervised learning demonstrates degraded performance for fine-tuning classifier on new data distribution.\\n\\n2. By using representation achieved from unsupervised learning, a classifier can adapt to an old data distribution after being trained on new distributions, more quickly than using representation derived from supervised learning.\", \"comments\": \"1. Technical question: In the computer vision literature, it is well recognized representation learned from supervised training (e.g. using imagenet) generalizes well when fine tuning other dataset in image recognition. However, to the best of my knowledge, there is not too much work on transfer representation from unsupervised learning to other recognition tasks. I was wondering if there is a trade-off between the performance of unsupervised representation and and its generality when the target goal is to achieve better performance for downstream tasks. \\n\\n2. Quality: In order to better demonstrate the investigation, I would suggest evaluate the learned representation on more different tasks. This will help better validate the claim that unsupervised learning can achieve better generally performant representation. Currently the validation focus on synthetically modified distribution. But I think this is fine as initial work for the workshop.\\n\\n3. Related work: I would suggest citing literatures in NLP and vision on using supervised/unsupervised representation (e.g. word embeddings, and transfer deep feature extractors in image recognition and other tasks.)\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Review of \\\"Interactions between Representation Learning and Supervision\\\"\", \"review\": \"Summary of the paper:\\n\\nThis paper tries to propose an empirical investigation of the interference/side effect an intermediate representation learning step might have on the supervised learning process.\\n\\n\\nReviewer\\u2019s assessment:\\nBesides the fact that the problem is not exposed clearly (even in the introduction) it is particularly hard to make sense of the \\u201cProposed Approach\\u201d, and in general to make sense of the whole point of the paper. 
The paper is fairly hard to read and is filled with statements that have no connection to the previous claims.\nLastly, for this type of work, a broader diversity of numerical experiments is usually expected.\nHence, I cannot recommend accepting this paper.\", \"rating\": \"1: Strong rejection\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
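Both the abstract and the reviews refer to the permuted MNIST benchmark. Its construction is standard and compact; here is a sketch, with the flattened-image layout assumed for illustration:

```python
import numpy as np

def make_permuted_task(images, seed):
    """One fixed random pixel permutation per task, applied to every
    flattened image. Labels are unchanged, so only the input
    distribution shifts between tasks."""
    perm = np.random.RandomState(seed).permutation(images.shape[1])
    return images[:, perm]

# tasks = [make_permuted_task(mnist_x, seed=s) for s in range(n_tasks)]
```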
r1ldYi9rOV
Few-Shot Regression via Learned Basis Functions
[ "Yi Loo", "Swee Kiat Lim", "Gemma Roig", "Ngai-Man Cheung" ]
The recent rise in popularity of few-shot learning algorithms has enabled models to quickly adapt to new tasks based on only a few training samples. Previous few-shot learning works have mainly focused on classification and reinforcement learning. In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks. Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of appropriate basis functions. This enables a few labelled samples to approximate the function. We design a Feature Extractor network to encode basis functions for a task distribution, and a Weights Generator to generate the weight vector for a novel task. We show that our model outperforms the current state-of-the-art meta-learning methods in various regression tasks.
[ "Few-Shot Leaning", "Regression", "Learning Basis Functions", "Few-Shot Regression" ]
Accept
https://openreview.net/pdf?id=r1ldYi9rOV
https://openreview.net/forum?id=r1ldYi9rOV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "r1ll_Fm3KN", "S1lolYlnt4", "Ske0LUqPFE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554950503637, 1554938099135, 1554650709817 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper61/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper61/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Interesting method proposed\", \"review\": \"The paper proposes a method that learns a regression model with a few samples.\", \"pros\": [\"It is an interesting application.\", \"Original work and clearly explained. Mathematically sound.\", \"It outperforms other methods.\"], \"cons\": [\"Just a few examples in the results section. Part of the results were attached as appendices. Looking at the results, my question would be how the different models compared in Tables 1 and 2 perform in the different regression data sets. Only one model is compared for each regression data set.\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"This paper proposes an interesting way learn spasre feature representation to address small sample regression problems.\", \"review\": \"This paper proposes a shot-learning method for small sample regression problems. Each regression problem consists of several tasks, each defines an input and output relation. Given samples of different tasks, the goal is to learn these task-dependent relations. The idea is first to learn a sparse feature representations which is task-independent. Then using the feature, output, and task label information to learn a task-specific mapping from the feature to output. These two steps are realized by Feature extractor and Weight generator. When given a new task, one needs to learn the task label of input mapping from the feature to output. This requires to output samples specific to the new task, which is done by task label generator.\\n\\nThe method seems to be novel and it works well on several regression problems. The idea of sparsity seems to be essential to achieve good estimation. It deserves further understanding. Following the result in Table 1 and 2, the Task Label and its generator plays an important role in 1d case. In 2d, the result is less conclusive since the confidence interval is too big to compare tasks with task label generator and no task label generator, not sure if it is due to the task label generator\\u2019s error.\\n\\nOverall, the idea and results are interesting. The effects of adding task label could also be done on the two new regression problems. This would be interesting to discuss as well.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SJedYj5ruV
Weak Supervision for Time Series: Wearable Sensor Classification with Limited Labeled Data
[ "Saelig Khattar", "Hannah O’Day", "Paroma Varma", "Jason Fries", "Jen Hicks", "Scott Delp", "Helen Bronte-Stewart", "Chris Re" ]
Using modern deep learning models to make predictions on time series data from wearable sensors generally requires large amounts of labeled data. However, labeling these large datasets can be both cumbersome and costly. In this paper, we apply weak supervision to time series data, and programmatically label a dataset from sensors worn by patients with Parkinson's. We then build an LSTM model that predicts when these patients exhibit clinically relevant freezing behavior (inability to make effective forward stepping). We show that (1) when our model is trained using patient-specific data (prior sensor sessions), we come within 9% AUROC of a model trained using hand-labeled data, and (2) when we assume no prior observations of subjects, our weakly supervised model matches the performance of a model trained with hand-labeled data. These results demonstrate that weak supervision may help reduce the need to painstakingly hand label time series training data.
[ "wearable", "sensors", "weak supervision", "time series", "Parkinsons" ]
Reject
https://openreview.net/pdf?id=SJedYj5ruV
https://openreview.net/forum?id=SJedYj5ruV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "H1xthT3zcE", "BJlokjiwYN", "Hyg7XVNUYN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555381681396, 1554655971509, 1554560027079 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper60/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper60/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\", \"comment\": \"Reviewers found issues with novelty and clarity\"}", "{\"title\": \"Experimental weaknesses and not novel enough\", \"review\": \"* Paper summary:\\n\\nThis paper proposes to apply weak supervision to a wearable time-series classification problem.\\n\\nWeak supervision consists in using a collection of heuristic labeling functions to build a generative function of labels. A classifier is then trained using labels generated by this probabilistic model. This is definitely appealing when no human labeler is available.\\n\\nThis method is applied to a time-series classification problem relying on sensor data, with a bidirectional LSTM model. Depending on the considered setting, the weakly-supervised model achieves a performance which is significantly lower or slightly higher than a strongly-supervised model (trained with the true labels).\\n\\n* Decision\\nThis paper suffers from experimental weaknesses and is not novel enough in terms of ideas to reach acceptance.\\n\\n* General remarks:\\n- Major experimental weakness: the authors only compare the weakly supervised model to the strongly supervised one and to the *individual* labeling functions (which perform quite poorly individually). However, due to the very construction of weak supervision, a probabilistic *ensemble model* is constructed during weak supervision. The performance of this simple ensemble of heuristic functions is not reported by the authors. Thereby, it is impossible to distinguish the contribution of the weak supervision from the simple ensembling of heuristic functions, without any learning. The claim of the authors is therefore not supported by their experimental apparatus.\\n- The paper is quite well written and easy to follow at high level, despite some imprecisions.\\n- The methodology, task and dataset are not presented rigorously enough in section 3. (Cf detailed remarks)\\n\\n\\n* Detailed questions\\n1. What is exactly the classification task to be performed? For instance, in 3.2, x is first defined to have the length of a single gait cycle, but is then extended with windows before and after it.\\n2. What is the total number of samples? The authors claim in the conclusion that it is \\u00ab\\u00a0small\\u00a0\\u00bb. A back of the envelope computation given numbers in Figure 1 leads me to think that each sample consists of a 2s window, plus ~3 s around it, so 5s in total. Sampling rate is 128Hz and there are two channels, so each channel has dimension ~1300. The recording shown in Figure1 goes until 350s, so 350/5~70 samples / recording, with 9 patients and 36 trials per patient this leads to ~2300 recordings. This is not a \\u00ab\\u00a0small data\\u00a0\\u00bb problem.\\n3. What value of the correlations C is used in the actual labeling functions?\\n4. 
What kind of padding is used for the LSTM model?\", \"rating\": \"1: Strong rejection\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting application, despite lacking important details\", \"review\": \"This paper proposes a two-tier machine learning system, combining a generative model and a discriminative model. The discriminative model is an LSTM, and the generative model is a \\\"label model\\\", which learns a labeling function based on domain-specific knowledge and empirical observations. The authors report successful experiments on wearable sensor data, which is an interesting and under-studied application domain of representation learning.\\n\\nThe introduction and discussion of prior literature is well written. However, the paper lacks a precise description of its main contribution, and how this contribution sets it apart from prior literature.\\n\\nMy other piece of criticism is that the description of the entire system is not summarized by a single diagram or algorithm listing. I had to jump back and forth between pages 2 and 3 to understand what the paper was about. Furthermore, calling the label model a generative model may be confusing to the reader. This choice of terminology may perhaps be appropriate, but in any case, the authors should strive for the maximum level of clarity. In the case of wearable sensor data, it is not clear what the end goal of the LSTM is: is it to predict freezing? Likewise, it is not clear how the label model manages to approximate domain-specific labeling functions.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
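The first review's main criticism is that the paper never reports the plain ensemble of its heuristic labeling functions. Here is a toy sketch of that baseline; the function names, features, and thresholds are purely illustrative, not the paper's heuristics:

```python
import numpy as np

ABSTAIN = -1

# Heuristics over precomputed statistics of a gait-cycle window x.
def lf_short_stride(x): return 1 if x["stride_len"] < 0.2 else 0
def lf_tremor_band(x):  return 1 if x["dominant_hz"] > 6.0 else ABSTAIN
def lf_low_speed(x):    return 1 if x["speed"] < 0.1 else 0

def majority_vote(x, lfs=(lf_short_stride, lf_tremor_band, lf_low_speed)):
    """Unweighted vote over non-abstaining labeling functions; the paper
    instead fits a generative label model to weight and denoise them."""
    votes = [v for v in (lf(x) for lf in lfs) if v != ABSTAIN]
    return int(np.mean(votes) >= 0.5) if votes else ABSTAIN
```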
SJePKo5HdV
Learnability for the Information Bottleneck
[ "Tailin Wu", "Ian Fischer", "Isaac Chuang", "Max Tegmark" ]
Compressed representations generalize better (Shamir et al., 2010), which may be crucial when learning from limited or noisy labeled data. The Information Bottleneck (IB) method (Tishby et al., 2000) provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective I(X; Z) − βI(Y; Z) employs a Lagrange multiplier β to tune this trade-off. However, there is little theoretical guidance for how to select β. There is also a lack of theoretical understanding about the relationship between β, the dataset, model capacity, and learnability. In this work, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X) = P(Z) becomes the global minimum of the IB objective. We show how this can be avoided, by identifying a sharp phase transition between the unlearnable and the learnable which arises as β varies. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, providing theoretical guidance for selecting β. We further show that IB-learnability is determined by the largest confident, typical, and imbalanced subset of the training examples. We give a practical algorithm to estimate the minimum β for a given dataset. We test our theoretical results on synthetic datasets, MNIST, and CIFAR10 with noisy labels, and make the surprising observation that accuracy may be non-monotonic in β.
[ "representation learning", "learnability", "information bottleneck" ]
Accept
https://openreview.net/pdf?id=SJePKo5HdV
https://openreview.net/forum?id=SJePKo5HdV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "BkxkqV26tE", "SJxPtBipKE", "Hylb4teTYE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555051655014, 1555047807183, 1555003688568 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper59/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper59/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Learnability for the Information Bottleneck\", \"review\": \"The information-bottleneck (IB) framework of Tishby and coworkers proposes to learn a compression / representation of the data (x, y) which captures as little information as possible about the input x but as much information as possible about the prediction target y. To this end, it proposes to minimize a functional of the form\\n\\nmin I(X; Z) - beta * I(Y; Z), ... (IB)\\n\\nw.r.t to the conditional distribution P(z|x). It is known that the choice of the balancing parameter beta > 0 is crucial. In particular, if beta <= 1, then the global optimum of the above problem is an uninformative distribution P(z|x) = P(z), which discards the input x.\", \"the_current_paper_pushes_the_analysis_further_by_proposing_a_new_notion_of_learnability_called_ib_learnability\": \"a problem is IB learnable at rank beta iff the stationary point* P(z|x) = P(z) is not a global optimum of problem (IB) above. The authors then go on to produce sufficient conditions under which a problem is learnable. Also, efficient heuristics are proposed for computing / checking these sufficient learnability conditions.\\n\\nThough I didn't check the proofs (> 16 pages), the paper seems well-grounded from a formal perspective. Also, a rich array of experiments on both synthetic and real data (MNIST, CIFAR10, etc.) are presented and discussed.\\n\\nMy only worry is that the paper might be slightly out of the scope of what the LLD workshop is concerned with. That notwithstanding, I think the paper should be accepted and discussed at the workshop.\\n\\n*A side result obtained by the paper is that for any beta, P(z|x) = p(z) is a stationary point for problem (IB).\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting analysis of learnability in IB with surprising conclusions\", \"review\": \"The paper analyses learnability of the IB regulariser. It is well-written and the results are interesting because the non-monotonic behavior is not something one would intuitively assume to appear. I definitely recommend acceptance of the paper, even though it may be slightly out of scope for the workshop due to its lack of experiments on limited data.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
r1gPtjcH_N
Improving Sample Complexity with Observational Supervision
[ "Khaled Saab", "Jared Dunnmon", "Alexander Ratner", "Daniel Rubin", "Christopher Re" ]
Supervised machine learning models for high-value computer vision applications such as medical image classification often require large datasets labeled by domain experts, which are slow to collect, expensive to maintain, and static with respect to changes in the data distribution. In this context, we assess the utility of observational supervision, where we take advantage of passively-collected signals such as eye tracking or “gaze” data, to reduce the amount of hand-labeled data needed for model training. Specifically, we leverage gaze information to directly supervise a visual attention layer by penalizing disagreement between the spatial regions the human labeler looked at the longest and those that most heavily influence model output. We present evidence that constraining the model in this way can reduce the number of labeled examples required to achieve a given performance level by as much as 50%, and that gaze information is most helpful on more difficult tasks.
[ "observational supervision", "eye tracking", "gaze data", "limited labeled data" ]
Accept
https://openreview.net/pdf?id=r1gPtjcH_N
https://openreview.net/forum?id=r1gPtjcH_N
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "r1g5kgNYKV", "Skg_bLZFtE", "SyxHVDhDF4" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554755553791, 1554744831950, 1554659116525 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper58/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper58/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Simple idea to incorporate gaze data into standard CNN architectures for image classification.\", \"review\": [\"This paper presents a simple method to incorporate gaze signals into standard CNNs for image classification, adding an extra term in the loss function. The term is based in the difference between the Class Activation Map obtained from the model, and the human map constructed using the eye tracking information. The authors apply their method to the POET dataset and report interesting results when using different sizes for the training set. They show that the gazed network achieved equivalent performance to that of a standard CNN using less training data for intermediate data regimes.\", \"The paper well written. It presents a simple idea which has a lot of potential, specially in the context of medical data (as suggested by the authors in their planned future works). Some comments I would like to see in the camera ready version of this work:\", \"It is not clear how the human attention map is constructed. The authors just say that this is obtained by \\u201cintegrating the eye tracking signal in time\\u201d. Since this is a crucial element in their framework, I would like to see a detailed description of how this is obtained. If space constraint is a problem, you could just add an appendix section with this info.\", \"In the orange line in Figure 2 (the line associated to the standard CNN) I do not see the std. This value is reported in table 2 (Appendix B), so I guess this can be a problem related to image transparency. Please, fix this problem so that we can see the confidence interval for the standard CNN as we can do with the gazed CNN.\", \"Do you think this idea could be also useful to improve image segmentation based on CNNs with limited data?\"], \"minor_corrections\": [\"In Section 4: \\u201cTo test the this hypothesis\\u201d should be \\u201cTo test this hypothesis\\u201d.\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting idea with encouraging initial results\", \"review\": \"This work proposed to use gaze information in order to reduce the sample complexity of a model and the needed labeling effort to get a target performance. The proposed method uses an attention layer and adds a penalty to reduce the gap between downsampled human attention maps and class activation maps. The experimental results show an improvement especially for middle sized samples, and a higher effect for harder tasks.\\n\\nThe idea and method is overall interesting, and would be interesting to discuss. 
One direction to build on this work, other than extending the experiments to more complex, larger scale applications, is to try to leverage more information from the human attention maps, as downsampling throws away some of the information that can be interesting for training.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
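A sketch of the kind of loss the reviews describe -- cross-entropy plus a penalty on disagreement between the model's class activation map and the downsampled human attention map. The squared penalty, the normalizations, and the weight `lam` are assumptions in the spirit of the paper, not its exact formulation:

```python
import torch
import torch.nn.functional as F

def gaze_supervised_loss(logits, cam, gaze_map, target, lam=0.5):
    """cam: (batch, H*W) raw activation map; gaze_map: (batch, H*W)
    nonnegative human attention map at the same resolution."""
    cam = F.softmax(cam, dim=1)                # CAM as a spatial distribution
    gaze = F.normalize(gaze_map, p=1, dim=1)   # gaze as a spatial distribution
    attn_penalty = ((cam - gaze) ** 2).sum(dim=1).mean()
    return F.cross_entropy(logits, target) + lam * attn_penalty
```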
HkxHFj5BdV
Parallel Recurrent Data Augmentation for GAN training with Limited and Diverse Data
[ "Boli Fang", "Miao Jiang" ]
The need for large amounts of training image data with clearly defined features is a major obstacle to applying generative adversarial networks (GANs) to image generation where training data is limited but diverse, since insufficient latent feature representation in the already scarce data often leads to instability and mode collapse during GAN training. To overcome this hurdle when applying GANs to limited datasets, we propose in this paper the strategy of parallel recurrent data augmentation, where the GAN model progressively enriches its training set with sample images constructed from GANs trained in parallel at consecutive training epochs. Experiments on a variety of small yet diverse datasets demonstrate that our method, with few model-specific considerations, produces images of better quality as compared to the images generated without such a strategy. The source code and generated images of this paper will be made public after review.
[ "GAN training", "Data Augmentation" ]
Reject
https://openreview.net/pdf?id=HkxHFj5BdV
https://openreview.net/forum?id=HkxHFj5BdV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "rkllb8JYFN", "rkgZWGHHFN", "HJg2YfANYE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554736631853, 1554498041420, 1554469507985 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper57/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper57/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\"}", "{\"title\": \"An interesting idea, but needs some polish\", \"review\": [\"Summary:\", \"The authors propose a method for improving the image generation quality of GANs by (1) augmenting the training set with noise-enriched samples and (2) running multiple GANs in parallel over different subsets of the data and periodically augmenting their respective trainings with images sampled from the other GANs.\", \"I find the second contribution more compelling than the first. It is a clever augmentation strategy that does not require human intervention. Overall, however, the presentation hinders the conveyance of the idea enough that I don't think this work is ready yet for release. With a little more polish clarifying the ideas and cleaning up the writing/figures, I think it could be an excellent submission to another workshop or conference down the road.\", \"It is unclear how much of the reported gains are due to the first contribution vs the second.\", \"The first sentence of the abstract introduces too much too quickly.\", \"I'm not sure what point you're trying to make with the comment about \\\"demonstrated tendencies of misrepresentation\\\"\", \"There are many issues with the writing: spacing issues (especially around parentheses), and minor grammar oddities\", \"Missing related work in your mention of automatic augmentation: TANDA (Ratner 2017);\", \"It is unclear to me why rotation and mirroring may lead to information loss, as you claim\", \"The text in Figure 2 is very hard to read\", \"The images in Figures 3-5 are too small to tell anything about. Use fewer images per block so they can be larger.\", \"Can the authors comment at all on why each dataset seems to have a different optimal number of times to be augmented?\", \"It seems to me that the parallel GAN setup will have memory and compute requirements of approximately 8x (if the 8 GANs are being run in parallel). This is pretty substantial overhead. Are there any details I'm missing that would mitigate that?\"], \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Too poor technical contribution to the LLD field\", \"review\": \"The authors presented what they called a \\\"parallel recurrent data augmentation\\\" technique for increasing the training images available for learning GANs. The method consists in introducing random noise to input images in a similar to K-fold cross-validation fashion: K GANs are trained simultaneously using different portions of the original training data, and the images generated by the K-th generator are perturbed with random noise and included in the training subsets of the other K-1 GANs. The method is evaluated on a simulated limited data experiment using CIFAR-10 images, showing improvements in terms on standard evaluation metrics in generative modelling.\\n\\nIn my opinion, the technical contribution is too weak for accepting the paper. 
Introducing random noise for data augmentation is standard not only in the GAN literature but also in deep learning in general. \\n\\nThe idea of feeding multiple generators with the artificial outputs of a surrogate model is perhaps novel, but its impact is limited and it is not properly explored in this submission. The quantitative improvement in the results cannot be attributed to this technique either, as it could be just the consequence of adding noise to the inputs. In this sense, the paper lacks a comparison with respect to feeding the generators with noisy images from the training set. \\n\\nIt would also be important to show whether the parallel strategy contributes to avoiding standard GAN issues such as mode collapse.\\n\\nThe data set used for evaluation (CIFAR-10) has a very low resolution, and therefore it is difficult to see if the semantics of the images are actually preserved. The article would benefit from including either a larger resolution data set or at least an experiment showing e.g. that the classification performance of a DNN trained for CIFAR-10 image classification using only artificial samples doesn't differ too much from a similar network trained on a real CIFAR-10 sample.\\n\\nThe article also suffers from some presentation issues. The related works section describes GANs research and data augmentation, but without focusing on the problem that is intended to be solved with the proposed tool (learning GANs with limited data). Section 2 should be reorganized to include important citations regarding this issue and the existing alternatives to solve it.\", \"other_minor_comments\": [\"There are spaces missing between words and parentheses, especially with most of the citations.\", \"The first sentence in the abstract is too long and redundant.\", \"Second paragraph in Section 2 suffers from many repetitions of the word \\\"research\\\".\", \"Figure 2 is unreadable due to poor resolution.\"], \"rating\": \"1: Strong rejection\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
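Structurally, the cross-feeding step the second review summarizes (K GANs, noise-perturbed samples exchanged between them) looks roughly as follows. This is a skeleton under stated assumptions -- the GANs, their training loops, sigma, and K >= 2 are all outside the snippet:

```python
import numpy as np

def exchange_round(generated_batches, sigma=0.05, rng=np.random):
    """Perturb each generator's samples with Gaussian noise and append
    them to every *other* GAN's training pool for the next epoch."""
    pools = [[] for _ in generated_batches]
    for k, batch in enumerate(generated_batches):
        noisy = batch + sigma * rng.standard_normal(batch.shape)
        for j in range(len(generated_batches)):
            if j != k:
                pools[j].append(noisy)
    return [np.concatenate(p) for p in pools]
```

The first review's 8x memory/compute concern is visible here: every round materializes K - 1 extra copies of each generated batch.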
rylVYjqHdN
A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision
[ "Cheng-Yu Hsieh", "Miao Xu", "Gang Niu", "Hsuan-Tien Lin", "Masashi Sugiyama" ]
The goal of multi-label learning (MLL) is to associate a given instance with its relevant labels from a set of concepts. Previous works on MLL mainly focused on the setting where the concept set is assumed to be fixed, while many real-world applications require introducing new concepts into the set to meet new demands. One common need is to refine the original coarse concepts and split them into finer-grained ones, where the refinement process typically begins with limited labeled data for the finer-grained concepts. To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones. The problem can be reduced to a multi-label version of the negative-unlabeled learning problem using the hierarchical relationship. We tackle the reduced problem with a meta-learning approach that learns to assign pseudo-labels to the unlabeled entries. Experimental results demonstrate that our proposed method is able to assign accurate pseudo-labels, and in turn achieves superior classification performance when compared with other existing methods.
[ "Multi-Label Learning", "Weakly-Supervised Learning", "Pseudo-Labels", "Meta-Learning" ]
Accept
https://openreview.net/pdf?id=rylVYjqHdN
https://openreview.net/forum?id=rylVYjqHdN
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "Hye_n0hG9V", "Sylgq-GuKV", "SkeUhEJVtV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555381935820, 1554682247781, 1554408621534 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper56/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper56/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"The paper has some relatively minor issue but an overall interesting concept\"}", "{\"title\": \"This paper investigated an interesting setting, where the coarse labels are abundant and fine-grained labels are scarce.\", \"review\": \"The paper proposed to minimize the discrepancy between inferred fine-grained labels and given coarse labels, by assigning each training example a fine-grained pseudo label, which agrees with the direction of gradient descent. The experiment shows improvement over existing methods.\\n\\nHowever, the paper didn't cover the details of the model, which makes their claim less convincing. E.g. does it use hierarchical classification or totally ignore the hierarchical structure in prediction? Is the loss of true labels and pseudo labels weighted?\\n\\nAlso, I think the method of assigning pseudo labels is not quite stable. How about picking the most confident examples in each round, just like self training? It's interesting to see comparison of these methods.\\n\\nAnyway, as the method and setting are quite novel, I'd recommend acceptance of this paper.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Review: A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision\", \"review\": \"The authors propose and implement a new meta-learning approach for multi-label classification where the labels are structured in a two-level hierarchy (i.e. if an example has a label y, it has the label y* where y* is any ancestor of y in the hierarchy).\\n\\nThe text in Figure 2 is very difficult to see since the font size is so small.\\n\\nI found the use of \\\"relevance\\\" describe a label of y=1 to be a little confusing. Why not use \\\"membership\\\", e.g. y=1 if the given instance belongs to the particular class?\\n\\nThe font in Table 1 is also a little small.\\n\\nIn equation 2, if (i,j) \\\\notin \\\\Omega, then what does [Y^psuedo]_i,j = {0, 1} mean? [Y^psuedo]_i,j isn't set-valued, is it?\\n\\nIn equation 7, the outermost parenthesis should be made larger to more appropriately fit its content. In addition, the sign function typically returns a value in {-1, 0, 1}, but your labels are in {0, 1}.\\n\\nHow did you obtain the feature vector from the examples in MS COCO? What is the classifier you used?\\n\\nFigure 4 is a bit difficult to read due to the dual y-axes. Perhaps splitting it into two figures would make it easier to understand?\\n\\nSome grammatical errors detract from the reader's ability to quickly understand the content. For example, there are a number of nouns with missing determiners (\\\"the\\\", \\\"a\\\", etc). Some verbs do not agree with the grammatical number of the subject noun. 
A few minor spelling errors: \\\"decent\\\" vs \\\"descent\\\", \\\"performances\\\" vs \\\"performance\\\", \\\"recover loss\\\" vs \\\"recovery loss\\\".\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
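The hierarchical reduction in the abstract -- a coarse label being absent implies every fine-grained child is absent -- is mechanical to spell out. A NumPy sketch; the label-matrix layout and the -1 "unlabeled" convention are assumptions for illustration:

```python
import numpy as np

def propagate_negatives(coarse_labels, children):
    """coarse_labels: (n, n_coarse) in {0, 1}; children maps a coarse
    column index to the column indices of its fine-grained concepts.
    Children of absent coarse concepts become provably negative (0);
    the rest stay unlabeled (-1), yielding a negative-unlabeled problem."""
    n_fine = sum(len(v) for v in children.values())
    fine = -np.ones((coarse_labels.shape[0], n_fine), dtype=int)
    for c, fine_idx in children.items():
        neg_rows = np.where(coarse_labels[:, c] == 0)[0]
        fine[np.ix_(neg_rows, fine_idx)] = 0
    return fine
```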
rygEej0Eu4
Data Interpolating Prediction: Alternative Interpretation of Mixup
[ "Takuya Shimada", "Shoichiro Yamaguchi", "Kohei Hayashi", "Sosuke Kobayashi" ]
Data augmentation by mixing samples, such as Mixup, has widely been used typically for classification tasks. However, this strategy is not always effective due to the gap between augmented samples during training and clean samples. This gap may prevent a classifier from learning the optimal decision boundary and increases the generalization error. To overcome this problem, we propose an alternative framework called Data Interpolating Prediction (DIP). Unlike common data augmentations, we encapsulate the sample-mixing process in the hypothesis class of a classifier so that train and test samples are treated equally. We derive the generalization bound and show that DIP reduces the original Rademacher complexity. Also, we empirically demonstrate that DIP can outperform existing Mixup.
[ "data augmentation", "generalization", "image classification" ]
Accept
https://openreview.net/pdf?id=rygEej0Eu4
https://openreview.net/forum?id=rygEej0Eu4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "ryxCCRhz9V", "BygktHtDY4" ], "note_type": [ "decision", "official_review" ], "note_created": [ 1555381973935, 1554646390830 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper55/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"A somewhat original approach, but insufficient experiments in order to draw conclusions regarding its value\", \"review\": \"The authors propose an alternative framework for applying the mixup method of [2]. To the best of my knowledge, this framework has not been propose before, hence the manuscript does offer some original ideas. That being said, the proposed framework is very related to established techniques, in particular to the practice of taking the average of the output of the classifier over a set of augmented version of a testing sample. The main practical contribution of the manuscript is the application of this technique: (a) for mixup and (b) at training time, too. On the downside, the reported results offer inconclusive evidence that this monte carlo sampling in training time can be a beneficial approach for classification. Generally, my main criticism is that I think that the experimental setup should have covered additional configuration combinations, so that the proposed framework is evaluated against suitable alternatives (details follow in the \\\"Cons\\\" section). In terms of clarity, I feel that the main ideas could have been presented briefly in a simpler language, before delving into the mathematical analysis.\", \"prons\": \"1. In addition to their experiments in CIFAR10 and CIFAR100, the authors briefly discuss the results on a simple synthetic dataset, which IMO illustrates nicely some potential dangers with mixup and it provides a rather clear motivation for this line of work.\\n\\n2. Derivation of some theoretical results, which are interesting in their own right.\", \"cons\": \"IMO, there are quite a few methodological shortcomings:\\n\\n1. The authors offer no justification of their choice of S=500 at testing time. I would expect a posteriori plot of how the testing accuracy changes for different values of S_testing given at least one training setting. As a related note when it comes to clarity, I assume that this sampling involves 500 cases from the training set, even at testing time. This is not made clear in the manuscript.\\n\\n2. The main technique used by the paper (averaging the output of the classifier over a set of samples at both training and testing time) is rather orthogonal to mixup, to my understanding at least. In fact, the practice of generating augmented version of a given testing sample and taking the average of the output of the classifier over all the augmented versions is a well-known practice that has been shown to improve the testing performance in many practical scenarios (as an example, [1]). Therefore, I think that this method should have been applied to and compared with other augmentation methods, in addition to mixup. For example, what happens if the hypothesis classes is enriched with augmented versions of the same sample using more standard transformations (rotations, translations, etc) ?\\n\\n3. It is my understanding that S=1 corresponds more or less to standard mixup, at training time at least. 
Thus, the difference between the configurations \"Mixup (alpha=1)\" and \"DIP(alpha=1, S=1, label-mixing)\" lies only at testing time, with the sampling over the 500 mixup pairs that takes place in the second case. Since S=1 is the most performant configuration in most experiments, the value of the proposed Monte Carlo sampling at training time is not fully supported by the reported results, IMO. This should be discussed in the \"Conclusion\" section of the manuscript.\n\n4. Dropout should have been used in the no-mixup baseline, as in the mixup paper [2].\n\n5. I have the feeling that this work is only partially relevant to the scope of the workshop.\", \"nitpicking\": \"I would like to mention a specific orthographic error that should be corrected: \"monte carlo\" is spelled \"monte calro\" for most of the manuscript.\n\n[1] Jamaludin, Amir, Timor Kadir, and Andrew Zisserman. \"SpineNet: Automated classification and evidence visualization in spinal MRIs.\" Medical image analysis 41 (2017): 63-73.\n\n[2] Zhang, Hongyi, et al. \"mixup: Beyond empirical risk minimization.\" arXiv preprint arXiv:1710.09412 (2017).\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
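To make the mixup-plus-test-time-averaging scheme discussed in the review above concrete, here is a minimal Python sketch. It is not the authors' code: `model`, `S`, and the pairing of a test point with random training points are illustrative assumptions about one plausible reading of the method.

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=np.random):
    """Standard mixup: train on convex combinations of example pairs
    and of their one-hot labels (Zhang et al., 2017)."""
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

def predict_with_sampling(model, x_test, x_train, S=500, alpha=1.0, rng=np.random):
    """Monte Carlo test-time variant: average the classifier's output over
    S mixup pairings of the test point with random training points."""
    probs = []
    for _ in range(S):
        lam = rng.beta(alpha, alpha)
        partner = x_train[rng.randint(len(x_train))]
        probs.append(model.predict(lam * x_test + (1 - lam) * partner))
    return np.mean(probs, axis=0)
```

With S=1 the test-time loop degenerates to a single mixed prediction, which is essentially the configuration the reviewer singles out.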
r1lQeoCVu4
Data for free: Fewer-shot algorithm learning with parametricity data augmentation
[ "Owen Lewis", "Katherine Hermann" ]
We address the problem of teaching an RNN to approximate list-processing algorithms given a small number of input-output training examples. Our approach is to generalize the idea of parametricity from programming language theory to formulate a semantic property that distinguishes common algorithms from arbitrary non-algorithmic functions. This characterization leads naturally to a learned data augmentation scheme that encourages RNNs to learn algorithmic behavior and enables small-sample learning in a variety of list-processing tasks.
[ "data augmentation", "algorithm learning", "RNN" ]
Accept
https://openreview.net/pdf?id=r1lQeoCVu4
https://openreview.net/forum?id=r1lQeoCVu4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "SyeTrWQM94", "SJlrvdkW5V", "B1xU3LADtV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555341637208, 1555261533204, 1554667182332 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper54/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper54/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Review\", \"review\": \"The paper tackles the problem of teaching an RNNs to approximate list-processing algorithms. The authors argue that what distinguishes algorithms from arbitrary functions is that they commute with a family of element-wise changes to their inputs and propose a method that learns RNN functions to approximate such family from data.\\n\\nTo learn commuting functions, the authors propose to synthetically generate labeled data by testing whether a function commutes with a collection of swaps. The corresponding classifier that approximates commutative swap functions is used for data augmentation. I find the observation about the parametricity property from type theory is interesting and the proposed data augmentation approach seems novel and interesting. \\n\\nThe only concern I have (perhaps, stemming from a mild misunderstanding of the method), if the proposed approach would work beyond the simple inputs of integer sequences (i.e., with more complex input-output data types, such as images, text, sounds, etc. as most of the modern machine learning has to deal with), or there are potential limitations that need to be resolved.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Interesting Paper\", \"review\": \"\", \"notes\": \"-Problem of teaching an RNN to approximate list processing algorithms from a small number of examples. \\n\\n -Learned data augmentation scheme based on parametricity from PL theory. \\n\\n -Helps with small-sample learning. \\n\\n -Algorithmic problems seem to require more training data than should be necessary. \\n\\n -List processing algorithms map from an input list to an output list. \\n\\n -Many algorithms are distinguished from arbitrary functions is that they commute over elementwise changes to the inputs. \\n\\n -Commutative means applying a function to each element of the list and running the list processor \\n\\n -While I like the basic idea that there are logical properties that algorithms should obey, the commutative property for arbitrary functions seems too strong to me. To give one example: multiplying by -1 does not commute with list sorting. (note: the paper addresses this later).\", \"summary\": \"This is a very good workshop paper which presents a simple idea\", \"some_ideas_for_future_work_and_directions\": \"-It might be interesting to consider enforcing this type of commutative property in the hidden states. It in some ways would require reversing your way of thinking - because you'd need to think about what properties the hidden states would need to have to allow the sequential part to be commutative, but it would have a big advantage that it would require a less data-dependent way of deciding if the function commutes or not. 
\n\n -There is a technique called Mixup (Zhang 2018), as well as Interpolation Consistency Training (Verma 2019), which try to encourage linear combinations of input examples to map to the corresponding interpolations of the outputs. If you write the interpolation as a function mix(x), you can rewrite this as: mix(f(x)) = f(mix(x)), where f is the neural network (this is literally what (Verma 2019) enforces, while (Zhang 2018) does something slightly different). Thus it is encouraging interpolations to commute with the neural network function.\", \"rating\": \"5: Top 15% of accepted papers, strong accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
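The commuting property at the heart of this record can be illustrated with a toy empirical check. The swaps in the paper are learned; here two fixed element-wise maps stand in for them, purely for illustration.

```python
import random

def commutes(func, elementwise_map, trials=100, list_len=8, lo=0, hi=9):
    """Empirically test whether func commutes with an element-wise change,
    i.e. whether func(map(g, xs)) == map(g, func(xs)) on random inputs."""
    for _ in range(trials):
        xs = [random.randint(lo, hi) for _ in range(list_len)]
        mapped_then_run = func([elementwise_map(x) for x in xs])
        run_then_mapped = [elementwise_map(y) for y in func(xs)]
        if mapped_then_run != run_then_mapped:
            return False
    return True

# reversal commutes with any element-wise map...
assert commutes(lambda xs: list(reversed(xs)), lambda x: x + 1)
# ...but sorting does not commute with negation, the reviewer's example
assert not commutes(sorted, lambda x: -x)
```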
BJeMeiCVd4
Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables
[ "Kate Rakelly*", "Aurick Zhou*", "Deirdre Quillen", "Chelsea Finn", "Sergey Levine" ]
Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.
[ "meta-reinforcement learning" ]
Accept
https://openreview.net/pdf?id=BJeMeiCVd4
https://openreview.net/forum?id=BJeMeiCVd4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "BJgazJpzcE", "ByggwiEAYN", "B1x-ykU9FN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555382036971, 1555086167790, 1554829017361 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper53/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper53/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Interesting work, re-wraped from the original version a bit too fast\", \"review\": \"This paper tackles the problem of meta-learning, more precisely the possibility of considering off-policies. For this, a first network estimates, from the history on the current task (the \\\"context\\\"), which kind of task it is (represented by a variable 'z'), and a second network (in a standard actor-critic approach) learns the policy for the current task, thanks to a conditioning on this 'z' (which allows to perform transfer learning between similar tasks within the same network).\\n\\nExperiments are performed on a standard problem (MuJoCo) and significant gains are observed (from 20 to 100 fewer examples needed for training).\\nWhich justifies the submission to this workshop on Limited Labeled Data, even though the primary objective of the article is not really to deal with a given dataset of limited data, but rather to target sampling data efficiency, as the article is about RL tasks.\\n\\nOverall, the paper is very good, with very interesting ideas; however it is clearly a very hastily made short version of a longer paper: abstract and conclusion were removed (are fully missing), text density maybe too high for a non-specialist.\\n\\nTherefore I hesitate between \\\"very good\\\" and \\\"borderline\\\".\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Important Meta-RL work but unsure if it fits the theme of the workshop\", \"review\": \"Note: While I really like the paper, I am not sure if it aligns with the workshop theme. The paper itself mentions \\\"Under review at the Workshop on \\u201cStructure & Priors in Reinforcement Learning\\u201d at ICLR 2019\\\" which I feel might be a better venue for this paper. I will let the organizers take a call and provide an objective assessment of the paper below without the workshop theme in consideration.\\n\\nThis paper proposes using off-policy RL during the meta-training time to greatly improve sample efficiency of Meta-RL methods. \\n\\nCurrent meta-RL methods (MAML/ProMP) are terribly sample inefficient since the adaptation phase during training is trained using vanilla policy gradients with a linear feature baseline (which is known to be very sample inefficient). A natural next step would have been to use a learned state dependent baseline (analogous to actor-critic methods) by meta-learning a critic. The authors take a step further and use an off-policy RL method (SAC) which has shown to be better performing than on-policy actor critic methods. The policy is also conditioned on a task embedding, enabling better adaptation. \\n\\nIf the authors could clarify the following points, I think it would make the paper much better:\\n* It's not clear if this method could be directly extendible to partially-observed settings since the factorization of the inference network would no longer hold. 
If being restricted to fully observed domains is a limitation, it should be noted clearly in the paper.\n* Ablations dissecting which of the two main components of the paper (inference network / using SAC) helped the performance most would be helpful; right now the relative importance of each component is unclear. Relatedly, the authors could also include a MAML/ProMP baseline where the critic is meta-learned. \n* The experimental details in this version of the paper are severely limited. I realize there are page length constraints, but the authors could put details in the appendix.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
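The "infer z from context, then condition the policy" loop that both the abstract and the first review describe can be sketched in a few lines. This is not the authors' code; `encoder` and `policy` are placeholder modules, and the Gaussian parameterization of the posterior is an assumption consistent with the abstract's "probabilistic filtering of latent task variables".

```python
import torch

def adapt_and_act(encoder, policy, context, obs):
    """Context-conditioned acting in the spirit of the paper: infer a
    Gaussian posterior q(z | context) over the latent task variable from
    the transitions gathered so far, draw a posterior sample (this gives
    posterior-sampling exploration), and condition the policy on it."""
    mu, log_std = encoder(context)                   # parameters of q(z | context)
    z = mu + log_std.exp() * torch.randn_like(mu)    # reparameterized sample
    action = policy(torch.cat([obs, z], dim=-1))     # task-conditioned policy
    return action, z
```

Because z is resampled as the context grows, early episodes on a new task behave like structured exploration and later ones like exploitation.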
HkxzljA4_N
Online Meta-Learning
[ "Chelsea Finn", "Aravind Rajeswaran", "Sham Kakade", "Sergey Levine" ]
A central capability of intelligent systems is the ability to continuously build upon previous experiences to speed up and enhance learning of new tasks. Two distinct research paradigms have studied this question. Meta-learning views this problem as learning a prior over model parameters that is amenable to fast adaptation on a new task, but typically assumes the set of tasks is available together as a batch. In contrast, online (regret-based) learning considers a sequential setting in which problems are revealed one after the other, but conventionally trains only a single model without any task-specific adaptation. This work introduces an online meta-learning setting, which merges ideas from both the aforementioned paradigms to better capture the spirit and practice of continual lifelong learning. We propose the follow the meta leader (FTML) algorithm which extends the MAML algorithm to this setting. Theoretically, this work provides an O(log T) regret guarantee for the FTML algorithm. Our experimental evaluation on three different large-scale tasks suggests that the proposed algorithm significantly outperforms alternatives based on traditional online learning approaches.
[ "meta learning", "few-shot learning", "online learning" ]
Accept
https://openreview.net/pdf?id=HkxzljA4_N
https://openreview.net/forum?id=HkxzljA4_N
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "BkxkP03G9N", "H1lEWJjkcN", "Ske4TVOCY4", "ByeWbyNRt4" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1555381847321, 1555177211885, 1555100860156, 1555083001329 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper52/AnonReviewer4" ], [ "ICLR.cc/2019/Workshop/LLD/Paper52/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper52/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Review of \\\"Online Meta-Learning\\\"\", \"review\": \"Summary of the paper:\\n\\nThis work proposes a \\u201cbest of both worlds approach\\u201d, by introducint an online meta-learning algorithm.\\nThe \\u201cfollow the meta leader\\u201d algorithm (and its analysis) heavily builds on the \\u201cfollow the leader\\u201d algorithm from online convex optimization, which leaves the door open for future improvements.\\nSome numerical experiments favorably comparing the approach with previous work are provided.\", \"a_few_comments_and_questions\": \"-there is a (small) typo, line 7 of section A1 page 8, in the appendix\\n-second corollary, page 11: why put a 32 in the big O notation (same comment for the proof)?\\n\\nReviewer\\u2019s assessment:\\nI found the paper to be well written. The ideas are exposed clearly and the numerical results support the approach. Since the problem tackled by this work clearly falls within the scope of the workshop, I recommend to accept this paper.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The paper seems to be sound\", \"review\": \"The paper introduces a task of online metalearning, where the agent is doing few shot learning online -- every task is seen only once.\\n\\nI am not an expert in online learning, but the paper seems to be sound. The experimentation looks thorough. The results are promising, but I would like to see some dataset closer to real world.\\n\\nA minor point -- graphs are very hard to read, please use vector images\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A new paradigm for continual lifelong learning\", \"review\": \"This paper formulated a new learning paradigm that combines meta-learning and online leanring, which is more general than few-shot supervised learning paradigm. The authors proposed a FTL-fashioned algorithm (FTML) that extends MAML to the online setting. FTML achieves a regret of order O(logT) under some C^2-smoothness assumption and a \\\\mu-strongly convex loss. This logarithmic regret bound is comparable to the usual FTL algorithms in a similar setting. The provable algorithm is however not straightforwardly applicable in practice, but it is shown that a MAML-typed modification can lead to reasonable performances.\\n\\nRegarding the theoretical part, the contribution does not seem to be technically significant (I don't really have time to check the analysis so I may be wrong) but provided a first set of results to the new paradigm. 
One flaw is perhaps that the implemented algorithm is not exactly the same as the provable one, but it is understandable...\n\nThe experimental setting, in particular the choice of baselines, is reasonable since there are no real prior algorithms for this new paradigm. I somehow don't like the fact that no uncertainty estimates are given in the figures (or at least the number of replications of each experiment could've been reported).\n\nOverall, the paper may lack some self-containedness due to the page limits, but remains a sound enough paper for the workshop.\n\nI would vote for accept.\", \"minor_comments\": \"1. The detailed assumptions have been placed in the appendix for the sake of space constraints, but in Corollary 1 it is stated that \"under assumptions 1 and 2...\" where no clue is given on where to find assumptions 1 and 2. Well, I finally found them in the appendix, but it took me a bit of time.\n2. I personally don't like a sole subsection indexed x.1 within a whole section, well it's a personal taste...\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
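The follow-the-meta-leader idea in this record can be sketched schematically. This is only a first-order caricature of FTML under stated assumptions: `grad_fn(params, data)` is a placeholder returning a gradient shaped like `params`, tasks are assumed to carry hypothetical `.support`/`.query` splits, and the true MAML outer gradient would differentiate through the adaptation step.

```python
def ftml_round(theta, tasks_so_far, grad_fn, inner_lr=0.01, meta_lr=0.001, steps=10):
    """Schematic FTML round: after a new task arrives, re-optimize the
    meta-parameters to perform well, after one adaptation step, on every
    task seen so far (the 'follow the leader' principle applied to the
    post-adaptation losses)."""
    for _ in range(steps):
        meta_grad = 0.0
        for task in tasks_so_far:
            adapted = theta - inner_lr * grad_fn(theta, task.support)   # inner MAML step
            meta_grad = meta_grad + grad_fn(adapted, task.query)        # first-order outer grad
        theta = theta - meta_lr * meta_grad / len(tasks_so_far)
    return theta
```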
rkeWgsAVuE
Search-Guided, Lightly-Supervised Training of Structured Prediction Energy Networks
[ "Amirmohammad Rooshenas", "Dongxu Zhang", "Gopal Sharma", "Andrew McCallum" ]
In structured output prediction tasks, labeling ground-truth training output is often expensive. However, for many tasks, even when the true output is unknown, we can evaluate predictions using a scalar reward function, which may be easily assembled from human knowledge or non-differentiable pipelines. But searching through the entire output space to find the best output with respect to this reward function is typically intractable. In this paper, we instead use efficient truncated randomized search in this reward function to train structured prediction energy networks (SPENs), which provide efficient test-time inference using gradient-based search on a smooth, learned representation of the score landscape, and have previously yielded state-of-the-art results in structured prediction. In particular, this truncated randomized search in the reward function yields previously unknown local improvements, providing effective supervision to SPENs, avoiding their traditional need for labeled training data.
[ "light supervision", "structured prediction", "indirect supervision", "domain knowledge" ]
Accept
https://openreview.net/pdf?id=rkeWgsAVuE
https://openreview.net/forum?id=rkeWgsAVuE
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "HkeXIJ6zqV", "Hyeg7HIAFE", "SJxYdaacFN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555382090849, 1555092759900, 1554861424936 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper51/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper51/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Good paper on energy landscape learning\", \"review\": \"This article proposes a way to learn a task when no labels are available, but supposing a reward can be computed for each proposed output by the training algorithm (without limitation on the number of such computed rewards).\\n\\nInstead of directly formulating this as a reinforcement learning problem, one learns an energy landscape, to fit the rewards observed, as a function of the (input, proposed output) pair; then one minimizes this energy landscape with respect to the output (by gradient descent), to find the one that will supposedly lead to the best reward for that input. This approach is known as \\\"SPEN\\\" (Structured Prodection Energy Networks).\\n\\nTo the opposite of traditional SPENs, this article suggests to combine the energy estimation with a random search; that is, during training, for a given input x, after having computed the optimal output y according to the learned energy landscape, one explores randomly around y to search for solutions leading to better rewards. This information is then used to adapt the energy landscape.\\nThis approach is new, to my knowledge, and makes perfect sense. It also fits the workshop topic (no strongly labeled data).\\n\\nExperiments on a significant task show improvement other methods.\\n\\nThe paper is nicely written, easy to read, even for complete beginners, despite the 4 page limitation.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Improving R-SPEN with margin-based loss\", \"review\": \"This paper studies the problem of training SPENs with indirect supervision -- human written scoring function.\\nComparing to the previous work (R-SPEN), the proposed method has a simpler sampling strategy and a slightly different loss function. \\nEmpirical performance comparison verifies the effectiveness of the proposed method. \\n\\nI think for this paper, the studied problem is interesting and the proposed solution sounds novel. \\nMy main concern is about the experiments.\\nThe implementation details and the hyper-parameter settings are not elaborated, while the relative improvement over R-SPEN is not very significant. \\nAlso, I think the training efficiency could be one potential advantage over R-SPEN, but this comparison is not conducted. \\nOverall, I think it's an interesting paper but can be further improved.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
Bkl-xjAVOV
Efficient Receptive Field Learning by Dynamic Gaussian Structure
[ "Evan Shelhamer", "Dequan Wang", "Trevor Darrell" ]
The visual world is vast and varied, but its variations divide into structured and unstructured factors. Structured factors, such as scale and orientation, admit clear theories and efficient representation design. Unstructured factors, such as what it is that makes a cat look like a cat, are too complicated to model analytically, and so require free-form representation learning. We compose structured Gaussian filters and free-form filters, optimized end-to-end, to factorize the representation for efficient yet general learning. Our experiments on dynamic structure, in which the structured filters vary with the input, equal the accuracy of dynamic inference with more degrees of freedom while improving efficiency. (Please see https://arxiv.org/abs/1904.11487 for the full edition.)
[ "structured filtering", "dynamic inference" ]
Accept
https://openreview.net/pdf?id=Bkl-xjAVOV
https://openreview.net/forum?id=Bkl-xjAVOV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "SJlm3jtKKE", "H1xS-wIuF4", "Bkx1TgsDKN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554779051206, 1554700029233, 1554653367349 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper50/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper50/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Interesting idea with nice results\", \"review\": \"This paper proposes a method that offers middle ground between deterministic structured filter representations and learnable free-form representations. The idea described in the paper amounts to learning the parameters of a fixed spatial structure (i.e., covariance matrix of a gaussian distribution), as well as the parameters of a free form filter. The combination of the two is expressed through a convolution composition. The advantage of the proposed method is the lower number of parameters needed, to achieve similar performance to e.g., deformable sampling methods; the lower number of parameters can in-turn be useful in scenaria where data is scarce.\\n\\n \\n\\nOne question I have is in regards to the \\\"static composition\\\" variant, which is not clearly described in the paper. Results using that variant are also listed only for the Cityscapes validation set.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The paper should be made more precise and conclusive\", \"review\": \"This paper proposes a structured convolution operator to model deformations of local regions of an image. The deformation field is parameterized by a Gaussian function. The advantage of this approach is that it significantly reduced the number of parameters. The result on image segmentation shows that it can achieve good accuracy and is more efficient.\\n\\nThe reviewer is unable to justify the advantage of the idea in the regime of small data samples. \\nThe paper is somewhere hard to follow, due to the imprecise expression like \\u201clearned end-to-end\\u201d, \\u201cStatic Composition\\u201d, etc. Is it to learn both parameters theta and Sigma, or just Sigma, or just theta? The result in Table 1 would be more clear if these were made precise. The formula (1), is it correct or not? Since convolution with g_Sigma involves bi-linear interpolation, the convolution does not seem to be done on the grid of I and f_theta. \\n\\nOverall, the idea is interesting, but the advantage of the approach to address small-sample problem is not conclusive enough. The experiments would be more convincing if it could be tested on other problems such as classification, detection.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
H1gxgiA4uN
Multi-Class Few Shot Learning Task and Controllable Environment
[ "Dmitriy Serdyuk", "Negar Rostamzadeh", "Pedro Oliveira Pinheiro", "Boris Oreshkin", "Yoshua Bengio" ]
Deep learning approaches usually require a large amount of labeled data to generalize. However, humans can learn a new concept from only a few samples. One of the higher human cognitive capabilities is to learn several concepts at the same time. In this paper, we address the task of classifying multiple objects by seeing only a few samples from each category. To the best of the authors' knowledge, there is no dataset specially designed for few-shot multi-class classification. We design a multi-object few-shot classification task and an environment for easily creating controllable datasets for this task. We demonstrate that the proposed dataset is sound using a method which is an extension of prototypical networks.
[ "few-shot", "few shot", "meta-learning", "metalearning" ]
Reject
https://openreview.net/pdf?id=H1gxgiA4uN
https://openreview.net/forum?id=H1gxgiA4uN
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "S1eVgzYsFV", "SJgdOWEstE", "Byl9rif-KN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554907628181, 1554887024398, 1554225986434 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper49/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper49/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\"}", "{\"title\": \"A new dataset and task with insufficient empirical support\", \"review\": \"In this paper, the authors defined a new task on multi-object few shot learning. For this new task, the authors presented a dataset generation pipeline as well as a baseline model for future research on this topic. In the experiments, the authors demonstrates 2 claims:\\n\\n1. The presented data generation pipeline is in a controlled environment. It is neither too hard or too easy, thus can be a good benchmark for future research purpose.\\n\\n2. The new situation for multi-object few shot learning is observably harder than single object few shot learning.\", \"comments\": \"1. Clarity: The clarity of the paper can be substantially improved. It should be written in a more self contained way. E.g. as an educated person without being an expert in this specific topic, it is not crystal clear to me how the protonet works from the description at the beginning of section 3. \\n\\n2. Significance: This paper proposed multi-object few shot learning. The major difference is the label space is exponential in the number of objects. The distinction from a single object problem with large # of classes would be the exponentially large class combination space. However in Table 1 and 2, the task is designed to have a relatively small # of total class combinations for multi objects. E.g. in table 2, the class combination space size is only 6^3 which is not too large. To this end, the validation of the new task or dataset is not convincing enough to me.\", \"3_minor\": \"Typo on repeated \\\"accuracy\\\" in the second paragraph of sec 4.3.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"needs better justification for treating multiclass settings as a distinct new problem\", \"review\": \"This paper presents a new data set for few shot learning with multiple classes being introduced simultaneously in each example. It also introduces a slight variant on existing few shot architectures first simultaneously learning multiple new classes.\\n\\nI am unconvinced that the problem of multi-class few shot learning poses a substantially different challenge from single class few shot learning. The cited source, Carey & Bartlett, is a paper entirely about one class few shot learning, so I don't see any evidence that humans, at least, have a distinct method of learning multiple classes simultaneously. The claim, \\\"It is yet an active area of study to know how human are capable of doing this,\\\" clearly requires at least one example of active research. While the community would no doubt welcome a new data set for few shot learning, the burden is on the authors to explain why this particular variation on the problem poses a distinct challenge or could be useful to treat separately. \\n\\nI don't see the significant innovation present in multi-prototype networks. 
Please contrast them with existing methods for identifying what object in a scene might be referred to with a new label, without trying to simultaneously label multiple objects.\", \"minor_issues\": \"Please don't cite an entire textbook (Goodfellow) as a source for a specific claim about human cognition. \\n\\nThis work could benefit greatly from a confident proofreader. Pervasive English grammar and spelling errors make the paper a lot less readable.\", \"rating\": \"1: Strong rejection\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
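For readers unfamiliar with the prototypical-network baseline that the abstract extends, here is a minimal single-object sketch. It is a generic illustration, not the paper's multi-prototype variant; `embed` is any feature extractor.

```python
import torch

def prototypical_predict(embed, support_x, support_y, query_x, n_classes):
    """Prototypical-network classification: each class prototype is the mean
    embedding of its few support examples; queries go to the nearest one."""
    z_support = embed(support_x)                       # (n_support, d)
    z_query = embed(query_x)                           # (n_query, d)
    protos = torch.stack([z_support[support_y == c].mean(dim=0)
                          for c in range(n_classes)])  # (n_classes, d)
    dists = torch.cdist(z_query, protos)               # Euclidean distances
    return dists.argmin(dim=1)                         # predicted class per query
```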
HyeggoCN_4
Learning To Avoid Negative Transfer in Few Shot Transfer Learning
[ "James O' Neill" ]
Many tasks in natural language understanding, such as natural language inference, paraphrasing and entailment, require learning relationships between two sequences. These tasks are similar in nature, yet they are often modeled individually. Knowledge transfer can be effective for closely related tasks, which is usually carried out using parameter transfer in neural networks. However, transferring all parameters, some of which are irrelevant to a target task, can lead to sub-optimal results and can have a negative effect on performance, referred to as \textit{negative} transfer. Hence, this paper focuses on the transferability of both instances and parameters across natural language understanding tasks by proposing an ensemble-based transfer learning method in the context of few-shot learning. Our main contribution is a method for mitigating negative transfer across tasks when using neural networks, which involves dynamically bagging small recurrent neural networks trained on different subsets of the source task/s. We present a straightforward yet novel approach for incorporating these networks into a target task for few-shot learning by using a decaying parameter chosen according to the slope changes of a smoothed spline error curve at sub-intervals during training. Our proposed method shows improvements over hard and soft parameter sharing transfer methods in the few-shot learning case and competitive performance against models trained with full supervision on the target task, from only a few examples.
[ "few shot learning", "negative transfer", "cubic spline", "ensemble learning" ]
Reject
https://openreview.net/pdf?id=HyeggoCN_4
https://openreview.net/forum?id=HyeggoCN_4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "Bye9VUkKtV", "SyxsIA-uKV", "Hy-fabSK4" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554736689959, 1554681427341, 1554484488516 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper48/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper48/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\", \"comment\": \"Desk Reject. The paper severely exceeds the 4 page limit.\"}", "{\"title\": \"Could be interesting but in need of a rewrite\", \"review\": \"The paper investigates a method for transfer learning where the amount of transfer from one task to another is controlled by a dynamic hyperparameter.\\nEvaluation is performed by combining the SNLI, MultiNLI and Question Matching datasets in various ways using transfer learning.\\n\\nThe work seems to have some interesting ideas, but the paper is lacking in clarity and therefore it is difficult to evaluate the validity and benefit of this approach.\\n\\nMuch of the system description is confusing and difficult to follow. Some examples:\\n- In Section 2.2, acronym SN used without defining it.\\n- In Section 3, matrix A and its size is defined, but it is not explained what the values in there represent or how they are then used.\\n- Section 3.1 says calculations are made based on \\\"voting parameters\\\", which are not mentioned anywhere else in the paper.\\n- Variables \\\\theta_s and \\\\theta_t are used but not defined.\\n- GRU-1 and GRU-2 are different configurations in the results tables, but are never defined.\\n- Tables 2 and 3 use S, M and Q, which are not defined anywhere. I'm guessing these are meant to reference SNLI, MNLI and QM datasets, but explaining this in the caption would be helpful.\\n\\nOverall, the results and findings are difficult to interpret. For clarity and comparable evaluation, the results tables should contain a non-transfer baseline, a simple fine-tuning baseline, and the SOTA results, in addition to the proposed models. Also, the tables are in need of informative captions about the training conditions.\\n\\nSection 2.2 describes 3 systems from previous work that are claimed to be SOTA for the tasks that are being investigated. The results section also claims to achieve results that are comparable to SOTA. However, these models are from 2016 and 2017, and there has been quite a bit of work on NLI since then.\\nThe best reported accuracy on SNLI in the paper is 82.5%, whereas current SOTA is at 90+%\", \"https\": \"//arxiv.org/pdf/1901.11504v1.pdf\\n\\nIt is not necessary to achieve SOTA in every paper, but comparing to old models and claiming top performance is misleading.\\n\\nSection 5 says learning was done on \\\"most of the available training data (between 70%-80%). Why not use the whole training data? And why not say exactly how much training data was used?\\n\\nSection 4.2 says tuning is done on \\\"50% of X \\\\in T_s of upper layers\\\", and it is not clear what is meant by this. That tuning the upper layers was done on 50% of source examples? Why was tuning done on source examples as opposed to target examples? Why only 50%? 
And why not tune all layers as opposed to only the upper layers?\\n\\nThe concept of \\\"negative transfer\\\" is the main focus of the paper but not really a frequently used term in the field of machine learning, so it should be defined and explained.\\n\\nThe paper is in serious need of proof-reading, as it contains half-finished sentences, many spelling and grammatical errors, and repeated words.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"8 pages, desk reject\", \"review\": \"This paper is longer than the 4 page limit.\", \"rating\": \"1: Strong rejection\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
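The "dynamic hyperparameter controlling the amount of transfer" that the first review describes can be sketched as a decaying blend of a bagged source ensemble with the target model. This is illustrative only: the paper chooses the decay from slope changes of a smoothed spline error curve, whereas a simple linear decay stands in here.

```python
import numpy as np

def mixed_prediction(target_probs, source_probs_list, step, total_steps, gamma0=0.9):
    """Blend bagged source-task models with the target model using a transfer
    weight that decays over training, so the target model gradually takes
    over and unhelpful (negative) transfer fades out."""
    gamma = gamma0 * max(0.0, 1.0 - step / total_steps)  # decaying transfer weight
    source_avg = np.mean(source_probs_list, axis=0)      # bagged source ensemble
    return gamma * source_avg + (1.0 - gamma) * target_probs
```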
H1xylj04_V
Siamese Capsule Networks
[ "James O' Neill" ]
Capsule Networks have shown encouraging results on \textit{de facto} benchmark computer vision datasets such as MNIST, CIFAR and smallNORB. However, they are yet to be tested on tasks where (1) the entities detected inherently have more complex internal representations, (2) there are very few instances per class to learn from and (3) point-wise classification is not suitable. Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points. In doing so we introduce \textit{Siamese Capsule Networks}, a new variant that can be used for pairwise learning tasks. We find that the model improves over baselines in the few-shot learning setting, suggesting that capsule networks are efficient at learning discriminative representations when given few samples. We find that \textit{Siamese Capsule Networks} perform well against strong baselines on both pairwise learning datasets when trained using a contrastive loss with $\ell_2$-normalized capsule-encoded pose features, yielding the best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.
[ "capsule networks", "face verification", "siamse networks", "few-shot learning", "contrastive loss" ]
Reject
https://openreview.net/pdf?id=H1xylj04_V
https://openreview.net/forum?id=H1xylj04_V
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "SJgDuL1FKV", "HJlLUIkFYE" ], "note_type": [ "decision", "official_review" ], "note_created": [ 1554736750710, 1554736718145 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper47/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\", \"comment\": \"Desk Reject. The paper exceeds the 4 page limit.\"}", "{\"title\": \"Too technical and hard to follow, often written as a list.\", \"review\": \"The paper proposes to combine siamese and capsule networks. It is overall too technical and hard to follow. Notations are very unclear because they are never described once and for all with the network architecture. Quantitative results are good. This ideas and the computational work is probably interesting but the writing requires a lot more effort to be understandable by an audience beyond the specialists of capsule and siamese networks.\\n\\n1. Introduction\\nOk.\\n'local features in receptive field' => tautology\", \"typo\": \"hinton1985shap\\n\\n2. Capsule Network\\n'the brain is hypothesized to solve an inverse graphics problem' => No reference. The brain doesn't solve inverse graphics problem. Scientists use maths to model what the brain is doing.\\n\\n'length of the vector' => can be confused with the vector dimension. Use 'norm'.\\nNotations and technical explanations of capsule networks are not clear for someone that is not familiar with the framework. A figure would be helpful to understand the architecture.\\n\\nThe paragraph 'Extensions of capsule networks' is a succession of technical description of previous extensions of capsule networks. Again notations are not clear. The description is hard to follow without prior knowledge on this topic.\\n\\n3. Siamese networks\\nAgain a list of technical description of previous works. This is only understandable and interesting for a specialist.\\n\\n4. Siamese Capsule Networks\\nThis idea is to combine siamese and capsule networks. Again this is hard to follow because there is not figure to help the reader understanding what is new and how this combination is performed.\\n\\n5. Results\\nQuantitative results are good. Figure 1 is too sized properly. Figure 2 has no x,y axis...\", \"rating\": \"1: Strong rejection\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rkxJgoRN_V
Automatic Labeling of Data for Transfer Learning
[ "Parijat Dube", "Bishwaranjan Bhattacharjee", "Siyu Huo", "Patrick Watson", "John Kender", "Brian Belgodere" ]
Transfer learning uses trained weights from a source model as the initial weights for the training of a target dataset. A well-chosen source with a large number of labeled data leads to significant improvement in accuracy. We demonstrate a technique that automatically labels large unlabeled datasets so that they can train source models for transfer learning. We experimentally evaluate this method, using a baseline dataset of human-annotated ImageNet1K labels, against five variations of this technique. We show that the performance of these automatically trained models comes within 17% of baseline on average.
[ "transfer learning", "fine-tuning", "divergence", "pseudo labeling", "automated labeling", "experiments" ]
Reject
https://openreview.net/pdf?id=rkxJgoRN_V
https://openreview.net/forum?id=rkxJgoRN_V
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "HylRWyMfcV", "r1ea_iJZ9V", "rJlzGd5TtE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555336966303, 1555262325501, 1555044362295 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper46/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper46/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Review\", \"review\": \"The main idea of the paper is to algorithmically label (or pseudolabel) a large amount of image data so that the model pre-trained on these data can better transfer to some target task. The idea is to use some specialized (also pre-trained) models that can identify certain classes as soft labeling functions (i.e., at pre-training time, use the distance to the average output activations as the pre-training learning signal).\\n\\nThe idea is intuitive and seems working, but the downside of the approach is that it requires these gold-standard specialized models used for source labeling (or known labeled datasets used to construct such models). Of course, this is just a pre-training method, and the goal is to transfer the model to a target domain that has some (limited) gold-standard annotations. However, the approach seems quite heuristic, it's unclear whether such pre-training introduces any biases (during pre-training), and generally a more thorough analysis of how the quality of the known labeled data affects model pretraining is necessary.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Variations on pseudo-label generation\", \"review\": [\"This paper describes 5 methods for generating pseudo-labels for images using a pre-trained VGG net. They involve concatenating N-nearest labels, clustering, and a geometric method.\", \"The first three methods might also have been classified as three variants of N nearest neighbors with N=1,2,3\", \"The geometric method creating a maximum surface triangle doesn't appear to have any motivation\", \"An important contribution in this domain, Hsu et al, 2018, https://arxiv.org/abs/1810.02334, which performs clustering on neural network features, bearing a fair amount of similarity to at least method 4, is not mentioned in this paper\", \"The specific notion of \\\"KL divergence\\\" is not explained or stated as a formula, making the exact probability distributions over which it is computed impossible to know\", \"Minor: \\\"figure 2\\\" should be a table (but appears to be a cropped screenshot from a spreadsheet)\", \"While it is good that different types of pseudolabeling are explored and evaluated, this paper is lacking on many fronts.\"], \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
HyxA1i0NOE
Enhancing experimental signals in single-cell RNA-sequencing data using graph signal processing
[ "Daniel B. Burkhardt", "Jay S. Stanley III", "Ana Luisa Perdigoto", "Scott A. Gigante", "Kevan C. Herold", "Guy Wolf", "Antonio J. Giraldez", "David van Dijk", "Smita Krishnaswamy" ]
Single-cell RNA-sequencing (scRNA-seq) is a powerful tool for analyzing biological systems. However, due to biological and technical noise, quantifying the effects of multiple experimental conditions presents an analytical challenge. To overcome this challenge, we developed MELD: Manifold Enhancement of Latent Dimensions. MELD leverages tools from graph signal processing to learn a latent dimension within the data scoring the prototypicality of each datapoint with respect to experimental or control conditions. We call this dimension the Enhanced Experimental Signal (EES). MELD learns the EES by filtering the noisy categorical experimental label in the graph frequency domain to recover a smooth signal with continuous values. This method can be used to identify signature genes that vary between conditions and identify which cell types are most affected by a given perturbation. We demonstrate the advantages of MELD analysis in two biological datasets, including T-cell activation in response to antibody-coated beads and treatment of human pancreatic islet cells with interferon gamma.
[ "computational biology", "graph signal processing", "genomics" ]
Accept
https://openreview.net/pdf?id=HyxA1i0NOE
https://openreview.net/forum?id=HyxA1i0NOE
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "SyxXKJaG5V", "SkecWJgtKV", "B1ghB6sPFN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555382138847, 1554738945753, 1554656580069 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper45/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper45/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"The paper is clear, the method is properly described and evaluated.\", \"review\": \"It is not clear how it is appropriate to the audience of the workshop because the data are properly labeled. Yet, the authors argue that there is noise in the data labeling.\\n\\n1/ Introduction\\nContext and goals are clearly described.\\n\\n2/ The MELD algorithm\\nClear.\\nThen, they introduce Vertex Frequecy Clustering but the method is not detailed.\\n\\n3/ Results\\nIt seems good yet it's hard to judge because there is no comparison with other methods.\\nAuthors used 'we' + citation which breaks double-blind review...\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Weak accept\", \"review\": \"* Content:\\n\\nThis paper introduces several methods to process experimental results on biological cells. These experiments are characterized by a high experimental variability, which makes difficult to process results. The proposed MELD algorithm maps hard group assignments (e.g. treatment/control in {-1, 1}) to soft assignments (in [-1, 1]) thanks to a low-pass filtering based on a graph built using data related to each cell. This later allows the authors to cluster relevant groups of cells, leading to biological insights.\\nNote that the paper is an excerpt of the bioRxiv paper of (Burkhardt et al.) (cited in the paper).\\n\\n* Comment:\\n\\nThis paper is quite dense and makes a heavy use of technical acronyms, but it remains understandable and is well written besides that.\\n\\nMy main concern is related to the experimental validation of the method by the authors, which appears quite qualitative to me. Indeed, the authors mainly observe that the proposed method allows them to gain biological insights on some past experiments. While this is interesting from a biological perspective, from a machine learning perspective a more thorough benchmarking of MELD would have been appreciated. Maybe using a synthetic model would help understand its possible weaknesses?\\n\\nDespite this concern, I would still vote in favor of acceptance for this paper, as methods removing \\\"noise\\\" from data to increase its \\\"signal\\\" ratio would probably be of interest to the LLD community.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
SyxCysRNdV
Data Augmentation for Rumor Detection Using Context-Sensitive Neural Language Model With Large-Scale Credibility Corpus
[ "Sooji Han", "Jie Gao", "Fabio Ciravegna" ]
In this paper, we address the challenge of limited labeled data and the class imbalance problem for machine learning-based rumor detection on social media. We present an offline data augmentation method based on semantic relatedness for rumor detection. To this end, unlabeled social media data is exploited to augment limited labeled data. A context-aware neural language model and a large credibility-focused Twitter corpus are employed to learn effective representations of rumor tweets for semantic relatedness measurement. A language model fine-tuned with a large domain-specific corpus shows a dramatic improvement on training data augmentation for rumor detection over pretrained language models. We conduct experiments on six different real-world events based on five publicly available data sets and one augmented data set. Our experiments show that the proposed method allows us to generate larger training data with reasonable quality via weak supervision. We present preliminary results achieved using a state-of-the-art neural network model with augmented data for rumor detection.
[ "Rumor Detection", "Data Augmentation", "Social Media", "Neural Language Models", "Weak Supervision" ]
Accept
https://openreview.net/pdf?id=SyxCysRNdV
https://openreview.net/forum?id=SyxCysRNdV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "HygV3LkFtE", "r1l0VF88F4", "BJgTv4QbtV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554736811789, 1554569526478, 1554228325040 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper44/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper44/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"A straight-forward data augmentation method for rumor detection employing ELMo\", \"review\": \"The authors present a data augmentation technique for rumor detection using recently introduced contextualized word representations, like ELMo. Last, they fine-tune them with diverse datasets (tweets at their majority) in order to build rumor-specific embeddings.\\n\\nThe paper is very clear and easy to comprehend. The authors present a very analytical data augmentation technique for the task of rumor detection by employing semantic relatedness fine-tuning on a large Twitter corpus that they collected. This way the effectively address the labeled data scarcity and class imbalance problems.\", \"pros\": [\"using state-of-the-art neural language models\", \"semantic relatedness fine-tuning\"], \"cons\": [\"not compared with other data augmentation techniques (other than using Kochkina's method)\", \"considerable time for collecting the data, fine-tuning, and kxn pair comparison\", \"What was the time (as well as resources) requested to fine-tune on the CREDBANK corpus, as well as the time required for the whole process? Will the data and methods be available to the public? Could other methods be used than semantic relatedness? What about involving transfer learning for similar tasks?\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"A useful augmentation method, persuasively tested.\", \"review\": \"This paper addresses the problem of limited data in rumor detection. They augment data by identifying unlabeled data as paraphrases of labeled rumors based on semantic similarity. They build a rumor detection model by fine-tuning a pretrained language model.\\n\\nI recognize space is limited, but a brief explanation of the rumor detection task and specifics about the class imbalance would help.\\n\\nPreprocessing as described removes critical meta information about whether the tweet is citing a particular source (url/rt). I'm skeptical that removing this information is necessary to build a model.\\n\\nI would like to see some exploration of whether sentence cosine similarity is actually a good metric for semantic similarity. What properties are captured by cosine similarity?\\n\\nDoes this manner of augmenting data create bias towards detecting the same sort of rumors as are in the corpus? That is, will topic be relied on more than other markers of credibility? Perhaps holding out specific events from augmented training data would be a good way to test.\\n\\nThe models that serve as a test bed show a sound methodology, and this paper strikes me as a solid work in progress.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SJxakiC4u4
Unsupervised Continual Learning and Self-Taught Associative Memory Hierarchies
[ "James Smith", "Seth Baer", "Zsolt Kira", "Constantine Dovrolis" ]
We first pose the Unsupervised Continual Learning (UCL) problem: learning salient representations from a non-stationary stream of unlabeled data in which the number of object classes varies with time. Given limited labeled data just before inference, those representations can also be associated with specific object types to perform classification. To solve the UCL problem, we propose an architecture that involves a single module, called Self-Taught Associative Memory (STAM), which loosely models the function of a cortical column in the mammalian brain. Hierarchies of STAM modules learn based on a combination of Hebbian learning, online clustering, detection of novel patterns and forgetting outliers, and top-down predictions. We illustrate the operation of STAMs in the context of learning handwritten digits in a continual manner with only 3-12 labeled examples per class. STAMs suggest a promising direction to solve the UCL problem without catastrophic forgetting.
[ "continual learning", "unsupervised learning", "online learning" ]
Accept
https://openreview.net/pdf?id=SJxakiC4u4
https://openreview.net/forum?id=SJxakiC4u4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "Byx2ylpGcE", "Skg8yoJdYN" ], "note_type": [ "decision", "official_review" ], "note_created": [ 1555382244409, 1554672350207 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper43/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Compelling method; some additional analysis would be helpful\", \"review\": \"Paper Summary\\n\\nThis paper proposes using hierarchies of Self-Taught Associative Memory (STAM) modules to solve the Unsupervised Continual Learning (UCL) problem, wherein salient representations must be learned from a stream of unlabeled data that can be used for classification with a small amount of labeled data provided at a later time. Importantly, the stream is assumed to be non-stationary in the sense that the number of classes in the stream varies with time and carries no associated prior that is known to the modeler. The paper describes the STAM approach and presents compelling evidence that the representations it learns are well-suited for few-shot classification when compared with reasonable baselines.\\n\\n\\nQuality (Pros)\\n(1) The UCL problem is clearly described\\n\\n(2) The STAM architecture represents an interesting method for learning a representation that takes advantage of hierarchical receptive fields in a similar manner to CNNs, but is adaptable to changing data distributions by design via the online clustering and outlier pruning steps. \\n\\n(3) The association of learned representations with different classes is accomplished in a reasonable way\\n\\n(4) Experimental results suggest that the STAM method consistently outperforms the CAE baseline \\n\\n\\nLimitations (Cons) and Questions\\n(1) Additional motivation for the UCL problem would be welcome, as well as experiments on additional datasets that would demonstrate a wider variety of use cases\\n\\n(2) It is unclear how such hyperparameters as the number of standard differences used in the novelty detector, the number of clusters chosen at each level in the hierarchy, and the allegiance value at which centroids are removed at the classification stage affect performance. How much computational effort is required to find these values? And how much does performance depend on them?\\n\\n(3) Computational efficiency is not touched upon; from a systems standpoint, what is the cost of this model relative to the CAE baseline?\\n\\n(4) I would suggest adding N = 1,000, 10,000 numbers to Figure 3 -- it is currently hard to read and contextualize. It would also help to show the difference between 1,00 and 10,000 on a single graph, perhaps, as it is difficult do see the relative changes as currently presented\\n\\n(5) A point that I found confusing was exactly how the outliers are detected in the second paragraph under equation (5). Specifically, from my reading, I was under the impression that y_{i+1,m} would be equivalent to c_i(y_{i+1,m}) except for any differences caused by averaging with other overlapping patches. Is this averaging with overlapping patches then the reason that y_{i+1,m} and c_i(y_{i+1,m}) could be different? Or have I misunderstood? 
Regardless, some clarifying language around this point could be helpful to the reader.\\n\\n\\nClarity\\n\\nThe presentation is generally clear and well-written, modulo the above suggestions.\\n\\n\\nSignificance\\n\\nThe STAM approach seems to be a compelling method to solve the UCL problem based on the analysis presented, and comparison to baselines seems reasonable. This method could be useful in a number of machine learning applications.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
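To make the STAM update concrete, here is a minimal Python sketch of online clustering with novelty detection in the spirit of what the abstract and the review describe: an input that lies farther from every centroid than the running statistics of past match distances allow is treated as a novel pattern and spawns a new centroid. The threshold rule (mean plus n_std standard deviations), the learning rate, and all names are illustrative assumptions, not the authors' implementation.

import numpy as np

class OnlineClusterer:
    # Online clustering with novelty detection: an input farther from every
    # centroid than (mean + n_std * std) of past match distances is treated
    # as a novel pattern and becomes a new centroid.

    def __init__(self, alpha=0.05, n_std=3.0, warmup=10):
        self.centroids = []        # list of 1-D float arrays
        self.alpha = alpha         # centroid learning rate
        self.n_std = n_std         # novelty threshold width (assumed rule)
        self.warmup = warmup       # matches needed before novelty detection
        self.match_dists = []      # history of best-match distances

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if not self.centroids:
            self.centroids.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - c) for c in self.centroids]
        j = int(np.argmin(dists))
        if len(self.match_dists) >= self.warmup:
            thr = np.mean(self.match_dists) + self.n_std * np.std(self.match_dists)
            if dists[j] > thr:                     # novel pattern detected
                self.centroids.append(x.copy())
                return len(self.centroids) - 1
        self.match_dists.append(dists[j])
        # Hebbian-style running average toward the matched input
        self.centroids[j] += self.alpha * (x - self.centroids[j])
        return j

In a hierarchy, one such clusterer per receptive-field size would process patches at its level and pass cluster assignments upward.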
r1gp1jRN_4
Unifying semi-supervised and robust learning by mixup
[ "Ryuichiro Hataya", "Hideki Nakayama" ]
Supervised deep learning methods require cleanly labeled large-scale datasets, but collecting such data is difficult and sometimes impossible. There exist two popular frameworks to alleviate this problem: semi-supervised learning and robust learning under label noise. Although these frameworks relax the restrictions of supervised learning, they have been studied independently. Hence, the training scheme that is suitable when only a small amount of cleanly labeled data is available remains unknown. In this study, we consider learning from bi-quality data as a generalization of these studies, in which a small portion of data is cleanly labeled, and the rest is corrupt. Under this framework, we compare recent algorithms for semi-supervised and robust learning. The results suggest that semi-supervised learning outperforms robust learning with noisy labels. We also propose a training strategy that combines mixup techniques to learn effectively from such bi-quality data.
[ "label noise", "semi-supervised learning", "robust leaning under to noisy label" ]
Accept
https://openreview.net/pdf?id=r1gp1jRN_4
https://openreview.net/forum?id=r1gp1jRN_4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "B1gtJDkYF4", "B1l2Qw2DFV", "H1xNccu5OE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554736864547, 1554659107686, 1553791627897 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper42/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper42/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"A simple approach for handling noisy labels, coupled with promising results\", \"review\": \"The authors start by introducing a formal setting that includes the semi-supervised and the robust learning tasks as special cases and they then proceed by proposing a strategy based on mixup [1] for training a model in this unified setting.\\n\\nMy general impression from this work is firmly positive. The manuscript is clearly written, the experimental setup covers adequately alternative methods and the description of the experiments includes the most important details. The results presented, while in my opinion do not allow drawing definite conclusions, they are at least promising. In more detail, I see the following prons and cons:\", \"prons\": \"1. Clearly written manuscript. Related work is properly presented, the proposed unifying framework is easy to understand and the experiments are described in adequate detail.\\n\\n2. The proposed learning approach in this unified setting is elegant and, to the best of my knowledge, original.\\n\\n3. The proposed method is compared adequately with alternative state-of-the-art ones and with reasonable baselines. The comparison of these state-of-the-art methods with each other in this specific setting is interesting on its own right, too.\\n\\n4. This works falls nicely within the scope of the workshop.\", \"cons\": \"1. In the reported experiments the proposed method had a performance which is very close to that of an alternative method (90% as oppose to 89% of [2]) while the difference when q=0 with when q=0.6 is also close (88% and 90% respectively). Given how close these numbers are, I would have preferred if the experiments had been repeated for more trials (say 3) and/or some statistical tests had been performed (even though it is my understanding that this is not standard practice in the conference) in order to gain some insight on the statistical significance of these differences.\\n\\n2. Of course one can always suggest more experiments, but still I think it would be interesting to see if/how the results change if some other CNN architecture is used.\\n\\n\\n[1] Zhang, Hongyi, et al. \\\"mixup: Beyond empirical risk minimization.\\\" arXiv preprint arXiv:1710.09412 (2017).\\n\\n[2] Verma, Vikas, et al. \\\"Manifold Mixup: Learning Better Representations by Interpolating Hidden States.\\\" (2018).\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Great workshop paper, but many areas to improve algorithm\", \"review\": \"Major comments and summary:\\n\\n Overall, I strongly recommend this paper as a workshop paper, but I think it needs more work to become a great conference paper. The problem is well-motivated and seems very important to me. 
My biggest issue is that I feel like one could go much \\\"further\\\" with this idea - and it shows empirically - the only improvement from using the noisy labels is to go from 89% -> 90%, which I would say is only a small improvement. If this problem gets solved I think you should be closer to 95% on CIFAR10.\", \"i_have_a_few_ideas_on_how_to_make_the_algorithm_better\": \"-On the noisily labeled data points $D_U$, replace the (noisy) label y_i with a soft-label which reflects the underlying noise process. For example if you're told that y = 1, but the noise process corrupts to a new random label with 50% chance, then make the new soft-label [0.5/9, 0.5, 0.5/9, 0.5/9, ...]. This is like changing the label randomly each time you see the example but it will have much less variance! This is just for the L_robust loss. \\n -Alternatively, what if you replace L_robust with a partial likelihood which reflects what q is? For example, if each data point is corrupted with 50% chance - then just try to encourage the probability of that point to be >=50%. Don't try to push the probability on that label up to 100% (in practice this would look like a hinge loss - I think). \\n -Is \\\"L\\\" loss cross-entropy or mean squared error? For semi-supervised consistency loss I think it helps to use mean squared error instead of cross entropy (cross-entropy is very harsh if the prediction and the label strongly disagree when the label is confident).\", \"other_comments\": \"-I think the bi-quality data setting is extremely interesting. However, the introduction could perhaps benefit from giving some more concrete examples of bi-quality data. I think model-based RL could be one interesting example (the model gives very noisy rollouts and the environment gives high-quality rollouts). \\n\\n -The paper motivates the algorithm by saying that the value of q might be unknown. However, even if q were known, would it be trivial to fuse SSL and RLL? There would still be a tradeoff between how much to trust the labels from the untrustworthy set and how much to rely on the trustworthy data. \\n\\n -Explicit hyperparameter to mix between the robust objective and the semi-supervised objective. Would it be too hard to learn this hyperparameter? \\n\\n -Is the value of \\\"q\\\" and the noise corruption process known? It doesn't seem to be used in the algorithm block. If q or the corruption process were known, would you have a way of using it?\", \"paper_reading_notes\": \"-Deep learning requires labeled data which is both large and clean. \\n\\n-This paper proposes to study the robust learning and SSL learning problems jointly - in that we assume that we have \\\"bi-quality data\\\" - where a small number of examples are cleanly labeled and a large amount of data has noisy labels. \\n\\n-Using just robust learning on all the data performs worse than SSL. \\n\\n-Paper proposes to combine SSL and RLL. \\n\\n-D_T is the trusted data (clean, labels always correct). D_U is the untrusted data (labels are sometimes wrong). \\n\\n-P refers to the fraction of the total data which is trusted. \\n\\n-Also have a score q, such that q=0 means the labels are totally random and q=1 means the labels are perfectly clean. 
\\n\\n-From this perspective SSL corresponds to the q=0 case (where the untrusted data has no label information).\", \"minor_comments\": \"-Would be good to also cite this newer paper focusing on mixup and SSL (only published recently, but spells the SSL stuff out more clearly): https://arxiv.org/pdf/1903.03825.pdf\\n\\n -\\\"To realize this goal, whose quality might be 0, we propose...\\\" -> this line doesn't make sense to me. Typo? \\n\\n -Please put more space into the algorithm 1 block.\", \"rating\": \"5: Top 15% of accepted papers, strong accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
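Both the abstract and the reviews build on the mixup operation of Zhang et al. (2018); a minimal Python sketch is below, together with the soft-label idea the second reviewer suggests for the noisy set D_U. The Beta(alpha, alpha) mixing prior is standard mixup; the uniform-flip noise model and all function names are illustrative assumptions, not the authors' exact strategy for combining D_T and D_U.

import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    # mixup (Zhang et al., 2018): convex combination of two examples;
    # y1, y2 are one-hot (or soft) label vectors.
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

def soften_noisy_label(y_onehot, q):
    # Reviewer's suggestion: if a label survives corruption with probability q
    # and is otherwise flipped uniformly to one of the other classes, replace
    # the hard label with its expectation under the (assumed) noise process.
    k = y_onehot.size
    soft = np.full(k, (1.0 - q) / (k - 1))
    soft[int(np.argmax(y_onehot))] = q
    return soft

For q = 0.5 and ten classes this reproduces the reviewer's example: 0.5 on the observed label and 0.5/9 on each of the others.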
S1ghJiRVd4
Biomedical Named Entity Recognition via Reference-Set Augmented Bootstrapping
[ "Joel Mathew", "Shobeir Fakhraei", "Jose Luis Ambite" ]
We present a weakly-supervised data augmentation approach to improve Named Entity Recognition (NER) in a challenging domain: extracting biomedical entities (e.g., proteins) from the scientific literature. First, we train a neural NER (NNER) model over a small seed of fully-labeled examples. Second, we use a reference set of entity names (e.g., proteins in UniProt) to identify entity mentions with high precision, but low recall, on an unlabeled corpus. Third, we use the NNER model to assign weak labels to the corpus. Finally, we retrain our NNER model iteratively over the augmented training set, including the seed, the reference-set examples, and the weakly-labeled examples, which results in refined labels. We show empirically that this augmented bootstrapping process significantly improves NER performance, and discuss the factors impacting the efficacy of the approach.
[ "Name Entity Recognition", "Bootstrapping", "Neural Networks", "Reference Set", "Biomedicine" ]
Reject
https://openreview.net/pdf?id=S1ghJiRVd4
https://openreview.net/forum?id=S1ghJiRVd4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "SylZflpfqN", "BylO8mDPKN" ], "note_type": [ "decision", "official_review" ], "note_created": [ 1555382280874, 1554637647570 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper41/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Experimental settings are unclear\", \"review\": \"This short paper presents a framework to train a neural Name Entity Recognition (NNER) model, from partially labelled data.\\nThe idea is to first train a NNER model on a small but fully labelled dataset (Ds), and use this model to labelled another, unlabelled, dataset (Dc). A reference set is then used to find mentions of entities in Dc. Finally, both Ds and Dc are combined to retrain the NNER model, which is then used to refine the labeling of Dc.While the second step is about data augmentation, the third and last step is about training the NNER model iteratively. Both steps are repeated K times.\\nHowever, it is not explicitly mentioned whether the NNER model is retrained from scratch, or if it is fine-tuned.\\nThe authors should give information regarding the setup of their model since they obviously change it from (Genthial, 2017).\\nWhile this part is of the paper was easy to follow, I found the section about the experiments and the discussion of the results much more confusing. The authors report results from 9 experiments in a single table, with a confusing naming, e.g.:\\n- what does \\\"+CRF\\\" mean? Does it refer to the architecture from (Genthial, 2017) with the CRF layer replaced with a softmax layer?\\n- the NNER-3%, why 3% and how did you select this subset of the Bio-ID dataset?\\n- Experiments 7 to 9, C1 and C2 are not explained, just mentioned in the caption.\\nOverall, Section 4 is really dense and hard to follow. Instead, the authors should consider splitting the explanation for each of their experiments in individual paragraphs.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
BkghJoRNO4
Cross-Linked Variational Autoencoders for Generalized Zero-Shot Learning
[ "Edgar Schönfeld", "Sayna Ebrahimi", "Samarth Sinha", "Trevor Darrell", "Zeynep Akata" ]
Most approaches in generalized zero-shot learning rely on cross-modal mapping between an image feature space and a class embedding space or on generating artificial image features. However, learning a shared cross-modal embedding by aligning the latent spaces of modality-specific autoencoders is shown to be promising in (generalized) zero-shot learning. While following the same direction, we also take artificial feature generation one step further and propose a model where a shared latent space of image features and class embeddings is learned by aligned variational autoencoders, for the purpose of generating latent features to train a softmax classifier. We evaluate our learned latent features on conventional benchmark datasets and establish a new state of the art on generalized zero-shot as well as on few-shot learning. Moreover, our results on ImageNet with various zero-shot splits show that our latent features generalize well in large-scale settings.
[ "generalized zero-shot learning", "zero-shot learning", "few-shot learning", "image classification" ]
Accept
https://openreview.net/pdf?id=BkghJoRNO4
https://openreview.net/forum?id=BkghJoRNO4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "rklvbwkYKE", "SkxNwZAPFN", "rkeupP1wFN" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554736894849, 1554665819746, 1554606016312 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper40/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper40/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Unclear writing with seemingly SOTA results on generalized zero-shot learning tasks\", \"review\": \"Pros:\\n- SOTA results\", \"cons\": [\"generally written in an unclear way\", \"Title should say \\\"zero-shot\\\"\", \"The indices of the covariance matrices in Fig 1 appear flipped\", \"Figure captions lack important information (for example, in Fig 1 there is no mention that the bottom VAE is for the class embeddings, and they also use c in the figure but c(y) in the text)\", \"not clear whether the results in Fig 2 for ImageNet are over multiple seeds, no error bars.\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Interesting work on multi-modal generalized zero shot learning\", \"review\": \"Summary: The authors propose a VAE architecture that leverages multi-modal information, namely images and text in order to learn a matched latent space representation. The representation is created by minimizing two additional objectives in addition to the beta-VAE loss: the cross-reconstruction (CA) and the distribution-alignment (DA) regularizers. The learnt joint-posterior is then utilized to train a classifier in a GZSL setting. The method posed shows promising results as it beats SOTA in several benchmarks but should address the following points in the camera ready version.\", \"major\": [\"There is no motivation or reasoning specified for the L1 cross-reconstruction (CA) loss in Equation 2. If the decoder is a gaussian then the natural assumption is a likelihood proportional to the L2 loss. If there is indeed a Laplace-likelihood this should be clearly stated. If not, then a small ablation study / discussion demonstrating the difference between the L1 & L2 (and corresponding distributional assumptions) for the CA should be provided.\", \"For the classifier part of the model is the dimensionality of the softmax fixed? Or does it simply refer to which sample it is associated with as in few-shot learning? In addition, how is the classifier trained? I.e. does it use both \\\\mu's and \\\\Sigma's ? Are they concatenated? Passed through separate networks?\", \"The final loss does not show the dependence on the hyper-parameters that weight the different terms of the loss; specifically the L_{CA} term seems to be hyper-parameter free? Training of multi-objective VAEs critically relies on the scale of these hyper-parameters. Beta in the beta-VAE is also not specified. These are critical and should be described.\"], \"minor\": [\"are the image features extracted from a pre-trained Resnet-101 model or is the encoder a Resnet-101 model? This should be made clear.\", \"How many posterior samples are extracted for classification? How are they used? Is a classification made for each sample or is the latent representation averaged / concatenated and then classified? 
Why isn\\u2019t just the mean used as is standard in a test-setting for VAEs?\", \"title is missing the word \\u201cShot\\u201d\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
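A PyTorch sketch of the two alignment terms the second review discusses: the L1 cross-reconstruction (CA) regularizer and a distribution-alignment (DA) regularizer written as the 2-Wasserstein distance between the two diagonal-Gaussian posteriors. Tensor shapes (batch x latent dim), the decoder call signatures, and the omission of loss weights and the beta-VAE terms are assumptions for illustration, not the authors' exact objective.

import torch

def distribution_alignment(mu_x, logvar_x, mu_c, logvar_c):
    # 2-Wasserstein distance between the image-feature posterior and the
    # class-embedding posterior (both diagonal Gaussians).
    std_x = (0.5 * logvar_x).exp()
    std_c = (0.5 * logvar_c).exp()
    w2_sq = (mu_x - mu_c).pow(2).sum(dim=1) + (std_x - std_c).pow(2).sum(dim=1)
    return w2_sq.sqrt().mean()

def cross_reconstruction(decoder_x, decoder_c, z_x, z_c, x, c):
    # CA regularizer: each decoder reconstructs its modality from the *other*
    # modality's latent sample, with an L1 loss as discussed in the review.
    return (decoder_x(z_c) - x).abs().mean() + (decoder_c(z_x) - c).abs().mean()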
r1esys0Nd4
Train Neural Network by Embedding Space Probabilistic Constraint
[ "Kaiyuan Chen", "Zhanyuan Yin" ]
Using higher order knowledge to reduce training data has become a popular research topic. However, the ability of available methods to draw effective decision boundaries is still limited: when the training set is small, neural networks become biased toward certain labels. Based on this observation, we consider constraining the output probability distribution as higher order domain knowledge. We design a novel algorithm that jointly optimizes the output probability distribution over a clustered embedding space to make neural networks draw effective decision boundaries. Since directly applying the probability constraint is not effective on its own, users need to provide additional, very weak supervision: marking batches whose output distribution differs greatly from the target probability distribution. Our experiments show empirically that our model converges to a higher accuracy than other state-of-the-art semi-supervised learning models while requiring fewer high-quality labeled training examples.
[ "probability", "constraint", "constraint learning", "weak supervision", "embedding", "deep neural network" ]
Accept
https://openreview.net/pdf?id=r1esys0Nd4
https://openreview.net/forum?id=r1esys0Nd4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "BkgJSxpG9V", "Skgk0KR9FV", "ryxjFza5FE", "rklRNBltFE" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1555382326610, 1554864582949, 1554858626765, 1554740533800 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper39/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper39/AnonReviewer3" ], [ "ICLR.cc/2019/Workshop/LLD/Paper39/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Leveraging additional supervision to handle label distribution shift\", \"review\": \"This paper tries to incorporate label distribution into model learning, when a limited number of training instances is available. \\nIntuitively, the output label distribution could be wrongly biased, and the prior information like label distribution could be helpful. \\nTo handle this problem, the authors propose two different techniques, the first regulate the output distribution and the second regularize the constructed representation. \\nPerformance comparison demonstrated the effectiveness of the proposed method when only a limited number of instances are available. \\n\\nI think the studied problem is interesting and the proposed solution is novel and reasonable. \\nMy main concern is about the assumption of the algorithm. \\nThe proposed learning algorithm assumes that the algorithm can access a relative accurate label distribution, and the output distribution regularization depend on this term. \\nBut for real world applications, it could be hard to get such knowledge, since in order to get the required annotation (as described in Appendix), the user needs to have a good understanding of the real distribution or annotate all instances in that batch. \\nBesides, I noticed that in the experiment results, the proposed method sometimes achieves worse performance than baselines when all training data is available. \\nThis phenomenon seems to me implies that the proposed method cannot fully leverage the additional information, as intuitively, with more information, it should perform better.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Nice performance, but needs some work before publication\", \"review\": \"This paper introduces a method to perform semi-supervised learning with deep neural networks. The model is trained on a few labeled examples and the expected label distribution to achieve relatively high accuracy, given the small training size. The problem is motivated well and provides a clear introduction and background of related work. Included figures provide useful reference for the paper and the experiments demonstrate that the approach works well.\\n\\nI found that the assumption that the probability distribution of true labels could be well known to not be very realistic. I think it would be quite interesting to see how your approach performs when faced with different levels of error (including non-gaussian error) in the probability distribution. The assumption that out-of-distribution batches are identified through weak supervision seems like a large caveat to me. 
\\n\\nWhen discussing the Model Description in Section 4, you say: \\u201cTo guarantee that our comparison focuses on output probability distribution instead of one single instance\\u2019s label, we train our models with batch size 128.\\u201d This statement seems overly specific to me. How does a batch size of 128 specifically guarantee that your comparison focuses on the output probability distribution? Would any sufficiently large batch size work? How do you find sufficiently large batches?\\n\\nOverall, the paper presents an architecturally simple solution that seems to work well to improve accuracy in a semi-supervised setting. \\n\\nThe paper has quite a few grammatical and formatting errors. It should be thoroughly read and edited before publication (I noted some of the grammatical issues below that I hope will be helpful).\", \"editing_suggestions\": [\"There appear to be several instances where the spacing between words and citations is incorrect. There are also a few times where parenthesis around citations were not properly closed.\", \"missing closing parenthesis in \\u201cconstraints (Stewart & Ermon (2016),\\u201d\", \"check spacing in citations - Ho et al.\", \"It would be nice to see a reference to prior work which introduces \\u201chigher level knowledge\\u201d in the last sentence of the \\u201cWeak Supervision\\u201d paragraph.\"], \"small_notes_on_language_that_may_make_it_more_readable\": [\"\\u201cneural networks fails to learn the decision boundary correctly from limited number of examples\\u201d -> \\u201cneural networks fail to learn the decision boundary correctly from a limited number of examples\\u201d\", \"\\u201cwe perform our probabilistic constraint on neural network\\u2019s embedding space,\\u201d -> \\u201cwe perform our probabilistic constraint on the neural network\\u2019s embedding space,\\u201d\", \"\\u201clow dimensional space through autoencoder.\\u201d -> \\u201clow dimensional space through (an or the) autoencoder.\\u201d\", \"\\u201cour model can converge to an high accuracy faster than\\u201d -> \\u201cour model can converge to a high accuracy faster than\\u201d\", \"\\u201crequire large quantity and high-quality labels for training\\u201d-> \\u201crequire a large quantity of high-quality labels for training\\u201d\", \"\\u201ccan reflect neural network\\u2019s confidence towards certain label.\\u201d -> \\u201ccan reflect the neural network\\u2019s confidence towards a certain label.\\u201d\", \"\\u201cInstead of performing counting the arguments of the maxima for all labels, which is not inefficient,\\u201d -> \\u201cInstead of counting the arguments of the maxima for all labels, which is not inefficient,\\u201d\", \"\\u201cWe add additional embedding layer with width 40,\\u201d -> \\u201cWe add an additional embedding layer with width 40,\\u201d\", \"\\u201cWe experiment our model under different level of constraints. \\u201c -> \\u201cWe experiment under different level of constraints. 
\\u201c\", \"\\u201cthat constraints output probability distribution in an embedding space.\\u201d -> \\u201cthat constrains the output probability distribution in an embedding space.\\u201d\", \"\\u201cwe need far less high quality training examples to reach high accuracy\\u201d -> \\u201cwe need far fewer high quality training examples to reach high accuracy\\u201d\", \"\\u201cThus, we conclude jointly optimizing\\u201d -> \\u201cThus, we conclude that jointly optimizing\\u201d\", \"\\u201cIt can me written mathematically\\u201d -> \\u201cIt can be written mathematically\\u201d\", \"\\u201cThus, we adopt the structure of decoder of autoencoder and \\u201c -> \\u201cThus, we adopt the structure of the decoder of an autoencoder and\\u201c\", \"\\u201cOur proposed method uses unsupervised loss\\u201d -> \\u201cOur proposed method uses an unsupervised loss\\u201d\", \"\\u201cand uses domain knowledge of output probability distribution to determine the actual decision boundaries.\\u201d -> \\u201cand uses domain knowledge of the output probability distribution to determine the actual decision boundaries.\\u201d\"], \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The work is highly relevant to the workshop. Results are good. Writing is sufficiently clear. Maths and notation are not.\", \"review\": \"1/ Introduction\\nThe first sentence is embarrassing as 'probability' has a mathematical definition.\\nThe problem target precisely the workshop topic. \\n\\n2/ Related work\\nOk.\\n\\n3/ Embedding...\\nThere are inconsistencies in the notations and many typos.\\nIs Q a 2d vector ? Why is it compared to P(k) later ?\\n\\n4/ Evaluation\\nResults are competitive.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
ryloJjA4d4
Unsupervised Scalable Representation Learning for Multivariate Time Series
[ "Jean-Yves Franceschi", "Aymeric Dieuleveut", "Martin Jaggi" ]
Time series constitute a challenging data type for machine learning algorithms, due to their highly variable lengths and sparse labeling in practice. In this paper, we tackle this challenge by proposing an unsupervised method to learn universal embeddings of time series. Unlike previous works, it is scalable with respect to their length and we demonstrate the quality, transferability and practicability of the learned representations with thorough experiments and comparisons. To this end, we combine an encoder based on causal dilated convolutions with a novel triplet loss employing time-based negative sampling, obtaining general-purpose representations for variable length and multivariate time series.
[ "time series", "representation learning", "unsupervised learning" ]
Accept
https://openreview.net/pdf?id=ryloJjA4d4
https://openreview.net/forum?id=ryloJjA4d4
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "HkxyyX43tN", "rJgGojoDFN", "HkljfJVUKE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554952918746, 1554656153779, 1554558739196 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper38/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper38/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"The paper presents some reasonable experiments and approaches for unsupervised time series. However as mentioned by R2 there is several issues. The paper also overclaims a bit some of the novelty. As noted by R1 and the metareviewer there is other works using triplet loss for timeseries, a relatively common approach in temporal dataset (e.g. video, audio), the popular causal convolution structure from wavenet is also quite well known, contributions should be more clear.\"}", "{\"title\": \"Weak reject\", \"review\": \"Paper summary:\\n\\nThis paper proposes a novel unsupervised embedding for time-series. Its architecture mainly consists in a series of dilated causal convolutions, followed by a temporal averaging to obtain a representation which is independent on the length of the time-series. The authors propose a triplet loss with negative mining to train the embedding, which is novel for real-valued time-series. This method is experimentally validated on a classification and a regression task.\", \"general_opinion\": [\"Pros:\", \"Good writing\", \"Detailed appendix with experimental hyperparameters, so that the results are pretty reproducible.\", \"For classification and regression, the proposed method reaches results close to the state-of-the-art.\", \"Cons:\", \"The authors state that the embedding is unsupervised, but in Appendix C they acknowledge that it is trained with early-stopping based on the final classification accuracy, thereby relying on an implicit supervision.\", \"This method does not improve over the state-of-the-art on time-series classification, even though it is its natural purpose\", \"The experimental validation of the proposed method is weak (cf detailed method).\", \"I have some concerns at the conceptual level (cf detailed questions).\", \"Taking into consideration these aspects, I tend to vote for a weak rejection.\"], \"detailed_questions\": [\"On a conceptual level, the ideas underlying the use of a triplet loss explained in the 3rd paragraph of section 2 seems a bit incomplete to me. On the one hand, the authors state that the embedding of a sub-series should be close to the embedding of the series. On the other hand, they also state that this embedding should be far from the embedding of a randomly sampled sub-series, possibly in the same long time-series. This seems contradictory, because if they belong to the same global time-series, they are both sub-series of the global time-series and therefore should be close. Also, the fact that no scale is taken into account when defining sub-series seems quite irrealistic.\", \"Why is using a *causal* embedding important for classification purposes?\", \"Experimentally, what are the results when the number of negative samples, K, varies? Experiments have been performed with this parameter varying as an ensemble is taken. It is a pity that the importance of this value is not reported, as it would have provided an intuition on its importance.\", \"The runtimes reported in Table 1 are a bit strange. 
Why does the runtime of the raw values vary so much (x30) when moving from daily to quarterly predictions, while the runtime of the representations diminishes (/3)?\"], \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Exciting work in a challenging and useful unsupervised setting\", \"review\": \"This paper provides an unsupervised representation learning algorithm for performing classification and regression in multivariate time series. It relies on a combination of cutting-edge techniques: triplet loss, stacked causal dilated convolutions (\\u00e0 la WaveNet), weight normalization, and residual connections. Although these techniques had been published before in isolation, they had never been implemented in combination up to this paper. Therefore, the contributions of this paper are novel enough for the ICLR LLD workshop.\\n\\nThe discussion of prior literature is solid. However, I will point out that the claim\\n\\\"this works is the first in the time series literature to propose a triplet loss for feature learning\\\"\\nis wrong. The paper of Jansen et al. ICASSP 2017 \\\"Unsupervised learning of semantic audio representations\\\" (https://arxiv.org/abs/1711.02209) is one counterexample.\\n\\nThe rest of the paper is very clear and eloquent. I recommend this paper for acceptance.\", \"rating\": \"5: Top 15% of accepted papers, strong accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
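A PyTorch sketch of a triplet loss with time-based negative sampling of the kind the abstract and reviews describe: the representation of a sub-series (z_pos) is pushed toward that of the series it was cut from (z_ref) and away from K sub-series sampled at random from other series (z_negs). The sigmoid-of-dot-product form and all names are assumptions about the exact objective, intended only to illustrate the idea.

import torch
import torch.nn.functional as F

def time_series_triplet_loss(z_ref, z_pos, z_negs):
    # z_ref, z_pos: (B, D) embeddings; z_negs: list of K tensors of shape (B, D).
    loss = -F.logsigmoid((z_ref * z_pos).sum(dim=-1)).mean()
    for z_neg in z_negs:
        # push negatives away: maximize -log sigma(z_ref . z_neg)
        loss = loss - F.logsigmoid(-(z_ref * z_neg).sum(dim=-1)).mean()
    return loss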
BkecJjCEuN
LABEL-EFFICIENT AUDIO CLASSIFICATION THROUGH MULTITASK LEARNING AND SELF-SUPERVISION
[ "Tyler Lee", "Ting Gong", "Suchismita Padhy", "Andrew Rouditchenko", "Anthony Ndirango" ]
While deep learning has been incredibly successful in modeling tasks with large, carefully curated labeled datasets, its application to problems with limited labeled data remains a challenge. The aim of the present work is to improve the label efficiency of large neural networks operating on audio data through a combination of multitask learning and self-supervised learning on unlabeled data. We trained an end-to-end audio feature extractor based on WaveNet that feeds into simple, yet versatile task-specific neural networks. We describe several easily implemented self-supervised learning tasks that can operate on any large, unlabeled audio corpus. We demonstrate that, in scenarios with limited labeled training data, one can significantly improve the performance of three different supervised classification tasks individually by up to 6% through simultaneous training with these additional self-supervised tasks. We also show that incorporating data augmentation into our multitask setting leads to even further gains in performance.
[ "multitask learning", "self-supervised learning", "end-to-end audio classification" ]
Accept
https://openreview.net/pdf?id=BkecJjCEuN
https://openreview.net/forum?id=BkecJjCEuN
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "Hyxg_gpGqE", "Sye52SE_FN", "SJg86aKA_E" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555382376059, 1554691505689, 1554058685995 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper37/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper37/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Intriguing approach, but unclear relationship to transfer learning\", \"review\": \"The authors propose a technique for improving the performance of audio classification neural networks by simultaneously training them to solve various auxiliary self-supervised tasks. This is achieved by sharing a common \\\"trunk\\\" network with multiple \\\"head\\\" networks, each tailored to solve a specific task. Typically, a network will be optimized with four head networks, one to solve a main task and the rest to solve three auxiliary tasks. Since the trunk is shared for the different tasks, the idea is that the auxiliary tasks will result in an improved trunk network compared to only optimizing the main objective. The authors present experimental results on three main tasks: audio tagging, speaker identification, and speech command classification. These were coupled with three auxiliary, self-supervised tasks: next-step prediction, denoising, and upsampling. Since training data for these auxiliary tasks could be created from unlabeled data, it represents a virtually inexhaustible supply of potential training data. Experimental results for this architecture showed a significant improvement was obtained by joint training with these auxiliary tasks. However, the authors also present results for a transfer learning experiment, where the network was first trained on the auxiliary tasks and then fine-tuned for the main task. This yielded an even greater improvement, which calls into question the utility of the proposed method. Unfortunately, no analysis of explanation of this fact is presented.\\n\\nThe description of the method is quite comprehensive, but at times vague and lacking some details. First, while the paper describes three main tasks (audio tagging, speaker identification, and speech command classification), much of the description of the architecture only refers to audio tagging. The method for the joint training is also only mentioned towards the end, in the appendix. It would be clearer if some of these details were included in the main text. It is also not completely clear whether the trunk network is jointly trained between all three main tasks and the auxiliary tasks, or between a main task and the auxiliary tasks. While it is likely the latter, it is never clearly spelled out. There is also little motivation for the auxiliary tasks. Why were these chosen as opposed to others? For example, what do the authors expect the next-step prediction task to add? If it is simply a matter of predicting the next sample in a time series, this shouldn't necessary require a very high-level knowledge of the audio signal structure, since continuity already provides a very strong prior. Similar questions can be posed for the other auxiliary tasks. Furthermore, some auxiliary tasks are trained with an L^2 loss, while others are trained with a smoothed L^1 loss. Why were these different choices made? Finally, the method of choosing hyperparameters is also not clear. 
The authors state that they were chosen \\\"heuristically favoring performance on the main task\\\". What does this mean?\\n\\nThe experiments provide interesting results, but are not as complete as they could be. For example, there are several popular datasets for audio tagging. Why were the particular datasets chosen? More importantly, why are no results for state-of-the-art methods presented? It is not necessary that the proposed architecture perform better than the current state of the art, but it is important to provide a context for the results. The authors also add a significant amount of training data to the problem through the auxiliary training tasks, but neglect to discuss any impact on training time. While this may not always be an issue, it is a relevant trade-off to be made when considering whether to adopt the proposed architecture. Finally, the authors make a claim about additivity (or complementarity) of their proposed approach when coupled with data augmentation. It would be interesting to see to what extent the changed labels overlap between the two methods. If they are indeed complementary, there would be little overlap between the set of labels changed by adding one and then the other.\\n\\nFinally, the transfer learning results (as mentioned above) pose a significant problem with respect to the utility of the proposed algorithm. If we can simply train the network trunk for the auxiliary tasks separately, and then add a new head network and train that for the main task, why should we consider the whole apparatus proposed in the current manuscript? This holds doubly true considering that the transfer learning approach significantly outperforms the proposed approach for the considered tasks. There may be some properties like reduced training time, fewer hyperparameters to select, and so on, but no discussion is provided. Since this is a potentially very important alternative, analysis and discussion of the merits of the two approaches are strictly necessary.\", \"some_more_minor_comments_follow\": [\"With respect to general-purpose audio representations, the authors may want to mention the audio scattering transforms developed by Mallat and collaborators.\", \"In Section 3, the authors mentioned experiments being made with \\\"0 to 3\\\" auxiliary tasks. The subsequent experiments only present results for 0 _and_ 3 auxiliary tasks, with no results for 1 or 2 tasks. This should be corrected.\", \"One dataset is described as containing \\\"uncompressed PCM\\\", another as \\\"WAVE format files\\\", while the format of the third is not specified. If the authors insist on including this information, they should be consistent in their descriptions. What is the format of the third dataset? Are the WAVE files also stored as PCM? Or are they stored in \\u00b5-law or some other format?\", \"The authors make the claim that \\\"spectral/cepstral representations of audio ... significantly restrict the range of audio processing tasks which they can perform\\\". A finely sampled spectral representation contains enough information for synthesizing a new signal which sounds virtually identical to the original. Where they may fall short is in sample-by-sample reconstructions, since they do not include the phase. One could argue that this sample-by-sample reconstruction is rarely what's desired in audio classification tasks. Indeed, the type of tasks for which they are necessary mostly includes low-level processing tasks such as the auxiliary tasks introduced in this paper. 
The fact that there is such an \\\"impedance mismatch\\\" between the main and auxiliary tasks should be cause for concern.\", \"The MAP@3, Top-1, and Top-5 metrics, although well-known, should be defined for completeness.\", \"The difference between the \\\"baseline\\\" and \\\"none (0)\\\" rows in Table 1 is a bit subtle. While the second does not include any unlabeled data, it still performs joint training on the main and auxiliary tasks but on the main task's training data. This is not obvious at first glance and should be clarified.\", \"In Table 2, it would be useful to provide results for \\\"NI + PS\\\", \\\"NI + PS + MTL100\\\" and the same for \\\"MTL500\\\". In the interest of space, however, it may be better to simply sketch these results in the text if they do not add too much more information.\", \"The authors state that \\\"Interestingly, the performance gains from augmenting with noisy data are similar to those obtaining by training the main task jointly with a self-supervised noise-reduction task.\\\" Why is this interesting? Why could this be the case? Is there a similarity in label assignment as well? I suggest the authors finish this train of thought.\", \"In Table 3, please write out the full names of the tasks as in Table 1.\", \"There are several capitalization errors in the bibliography. In particular, several uppercase characters have been converted to lowercase: \\\"English\\\", \\\"Mandarin\\\", \\\"ChiME\\\", \\\"PyTorch\\\".\", \"Having figures in gray makes them hard to read, especially when printed. Unless there is some compelling reason not to, I suggest they be regenerated in black.\", \"Please provide a definition for dilated convolution.\", \"The sequence in Section 6.2.1 should be delimited by parentheses, not curly braces (which delimit sets).\", \"The \\\"smoothed L^1\\\" norm is also known as the Huber loss in statistics and other fields.\", \"\\\"Python\\\" has the first letter capitalized.\", \"Please define SNR.\", \"Finally, I strongly suggest the authors make their code available to the general public.\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Accept\", \"review\": \"Summary:\\nThe paper proposes a method for using multi-task learning with a variety of supervised and unsupervised audio datasets to train a convolutional network model that operates on the raw waveform. The results show that their technique improves results on three datasets compared to training on those datasets alone, and the results scale with the amount of training data.\", \"pros\": \"The paper is well written and a pleasure to read. The experiments are rigorous and well explained.\\n\\nI think this is the first time I've seen a multi-task architecture for raw audio.\", \"cons\": \"These datasets are not what I'd call \\\"limited labeled data\\\"---Google Speech Commands, for example, has thousands of examples of each word. The datasets are smaller than LibriSpeech, which you use as the big unsupervised dataset, but ironically LibriSpeech probably counts more as \\\"limited labeled data\\\": consider how many times the word \\\"xylophone\\\" might turn up (probably not often). 
Perhaps a better experiment would be to recognize these rare words, given just a few examples, and see how well multi-task learning + unsupervised learning helps with that.\\n\\nThe paper \\\"Listening to the World Improves Speech Command Recognition\\\" (https://arxiv.org/abs/1710.08377v1, accepted to AAAI 2017) is about a related approach: transfer learning, as opposed to multi-task learning, for audio tasks. You should cite this paper and compare your method with theirs. \\n\\n93% accuracy on Google Speech Commands seems weirdly low. I myself have trained a model that operates on the raw waveform, without any data augmentation/multi-task learning, and gets 95% accuracy for the full 30 commands, not just the 12 labels you picked. It might be because you don't use any recurrent layers and just use convolutional layers. It's not that important given the other results, but maybe it indicates a bug. I hope you release your code!\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
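A PyTorch sketch of the shared-trunk/multi-head multitask setup the paper and reviews discuss: one trunk feeds several task-specific heads, and the total objective is a weighted sum of per-task losses. The head names, the plain weighted sum, and the stand-in for the actual WaveNet-style trunk are illustrative assumptions, not the authors' implementation.

import torch.nn as nn

class TrunkWithHeads(nn.Module):
    # Shared feature trunk with one head per task.
    def __init__(self, trunk, heads):
        super().__init__()
        self.trunk = trunk                 # e.g. a WaveNet-style encoder (assumed)
        self.heads = nn.ModuleDict(heads)  # {"main": ..., "denoise": ..., ...}

    def forward(self, x):
        h = self.trunk(x)
        return {name: head(h) for name, head in self.heads.items()}

def multitask_loss(outputs, targets, losses, weights):
    # Weighted sum of per-task losses over whichever heads produced outputs;
    # auxiliary, self-supervised tasks can draw on unlabeled audio.
    return sum(weights[k] * losses[k](outputs[k], targets[k]) for k in outputs)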
Sylq1jRV_N
Sub-Task Discovery with Limited Supervision: A Constrained Clustering Approach
[ "Phillip Odom", "Aaron Keech", "Zsolt Kira" ]
Hierarchical reinforcement learning captures sub-task information to learn modular policies that can be quickly adapted to new tasks. While hierarchies can be learned jointly with policies, this requires a lot of interaction. Traditional approaches require less data, but typically require sub-task labels to build a task hierarchy. We propose a semi-supervised constrained clustering approach to alleviate the labeling and interaction requirements. Our approach combines limited supervision with an arbitrary set of weak constraints, obtained purely from observations, that is jointly optimized to produce a clustering of the states into sub-tasks. We demonstrate improvement in two visual reinforcement learning tasks.
[ "discovery", "limited supervision", "constrained clustering", "hierarchical reinforcement", "captures", "information", "modular policies", "new tasks", "hierarchies", "policies" ]
Accept
https://openreview.net/pdf?id=Sylq1jRV_N
https://openreview.net/forum?id=Sylq1jRV_N
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "S1lEqxazqV", "BJeLBiDdF4", "rJln9z1vFE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555382411721, 1554705214405, 1554604692216 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper36/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper36/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Well-motivated, well-written paper\", \"review\": [\"The authors propose a semi-supervised strategy for sequential decision-making. They initially use supervised imitation learning to approximate the expert policy; then they extract features through weak constraints (temporal and global) from this policy. In addition, they use an autoencoder to complement the aforementioned features.\", \"well-written paper\", \"well-motivated with clear ideas\", \"could elaborate more on the differences with the literature\", \"stronger ablation study about each constraint's contribution should be performed\"], \"some_additional_question_that_were_not_clear_while_studying_the_paper\": \"1) Why do the authors use 8 clusters? What's the intuition behind this number?\\n2) What are the exact training/implementation details? For instance, how many episodes did their method require? \\n3) The authors mention that there might be an imbalance among the different types of constraints. Could they elaborate on that? \\n4) Why are the data with high uncertainty identified as useful sub-goals?\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"motivation and methodology could use more substantiation\", \"review\": [\"The authors propose a semi-supervised clustering approach to discover subtasks for sequential decision-making. The authors formulate local temporal constraints using entropy with a sliding window over a policy. Global, state-based constraints are determined using expert policy features from imitation learning and a visual autoencoder representation. The authors validate their approach on two VizDoom tasks, Pickup and Maze, treating specific actions as different sub-tasks.\", \"Shows clear comparison with ablations of different constraints.\", \"The problem could be better motivated. There seems to be a gap between the initial problem, sequential decision making, and the approach of constrained clustering. It is not obvious what the \\u201clabeling and sample generation requirements\\u201d are.\", \"How will sub-tasks be used in the downstream decision-making? Along those lines, is there an empirical way to measure the downstream efficacy of this approach?\", \"The formulation of the problem was hard to follow. How can the constraints be used to make decisions? Can you clarify the difference between ground truth labels of sub-tasks and different ground truth constraints?\", \"\\u201c... learns from limited (or no) labeled data by generating weak constraints that do not require sub-task labels or access to a simulator for evaluation.\\u201d Is there a trade-off study for none to some limited labels?\", \"While the paper\\u2019s motivation appears suggests a no/limited-label approach, it is not evident how the clustering approach is effective for the downstream task.\"], \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
S1x7WjnzdV
Spatial Broadcast Decoder: A Simple Architecture for Disentangled Representations in VAEs
[ "Nick Watters", "Loic Matthey", "Chris P. Burgess", "Alexander Lerchner" ]
We present a neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations. Instead of the deconvolutional network typically used in the decoder of VAEs, we tile (broadcast) the latent vector across space, concatenate fixed X- and Y-“coordinate” channels, and apply a fully convolutional network with 1x1 stride. This provides an architectural prior for dissociating positional from non-positional features in the latent space, yet without providing any explicit supervision to this effect. We show that this architecture, which we term the Spatial Broadcast decoder, improves disentangling, reconstruction accuracy, and generalization to held-out regions in data space. We show the Spatial Broadcast Decoder is complementary to state-of-the-art (SOTA) disentangling techniques and when incorporated improves their performance.
[ "disentangling", "VAE", "coordconv", "representation learning", "untangling" ]
Accept
https://openreview.net/pdf?id=S1x7WjnzdV
https://openreview.net/forum?id=S1x7WjnzdV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "r1xI7wJtF4", "r1eHYs0wFE", "S1gWju1DFV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554736926271, 1554668412871, 1554606233352 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper35/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper35/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Good work and write-up but unlcear about fit with this workshop\", \"review\": [\"Pros:\", \"extensive and thorough experimentation\", \"interesting and original idea\", \"proposed an approach that is complimentary to previous approaches and helps improve SOTA results\", \"comprehensive supplementary\"], \"cons\": [\"not immediately clear how this work relates to the limited labels setting\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"An Interesting Analysis of Coordinate Tiled VAE Decoders\", \"review\": \"Summary: the authors present a simple extension of VAEs (and CoordConv VAEs) and demonstrate through a variety of experiments that the proposed tiling and (1x1) coord-conv solution produces a more disentangled representation. The presentation of detailed ablation studies is helpful in understanding exactly what benefits are brought by 1x1 convolutions vs. upsampling The empirical results are strong and promising, but a few points should be addressed in the final version.\", \"major\": [\"The results comparing Spatial Broadcast VAEs to CoordConv VAEs is a pretty critical result and should be moved into the main text from the appendix. Note that this should be present for all experiments, including the ones demonstrating the rate-distortion curves. In addition it would be interesting to contrast the CoordConv VAE with a few upsample layers, followed by 1x1 convolutions (as in the Spatial Broadcast VAE) to see if the effect is mainly from tiling or from the lack of upsampling blocks.\", \"A simple evaluation of disentanglement would be to use a linear classifier on the (mean) posterior sample after the training of the VAE. This would provide a more informative evaluation of (linear) separation in the latent space. This has been done in Associative Compressive Networks by Alex Graves for example.\"], \"minor\": [\"Figure labeling (i.e. a, b) missing on figure 3.\", \"Consistency between letter figure labeling and left/right.\", \"A4: what is the condition for termination of training? Early Stopping? If so what are the hyper-parameters used there?\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
rJlzbihzdE
Enhancing Generalization of First-Order Meta-Learning
[ "Mirantha Jayathilaka" ]
In this study we focus on first-order meta-learning algorithms that aim to learn a parameter initialization of a network which can quickly adapt to new concepts, given a few examples. We investigate two approaches to enhance generalization and speed of learning of such algorithms, particularly expanding on the Reptile (Nichol et al., 2018) algorithm. We introduce a novel regularization technique called meta-step gradient pruning and also investigate the effects of increasing the depth of network architectures in first-order meta-learning. We present an empirical evaluation of both approaches, where we match benchmark few-shot image classification results with 10 times fewer iterations using the Mini-ImageNet dataset and, with the use of deeper networks, attain accuracies that surpass the current benchmarks of few-shot image classification using the Omniglot dataset.
[ "meta-learning", "generalization", "few-shot learning" ]
Accept
https://openreview.net/pdf?id=rJlzbihzdE
https://openreview.net/forum?id=rJlzbihzdE
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "HJeu2gaf9N", "ryeQgHNaYV", "SJlECS-_FE" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555382448271, 1555018986680, 1554679244272 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper34/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper34/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"This in an interesting simple extension, results seem promising, but very preliminary\", \"review\": \"The paper introduces a simple extension of first-order MAML/Reptile algorithm. It proposes to stop the inner loop if the magnitude of the update does not exceed a certain threshold. Exprerimentation looks promising, although confusing. It is not clear why Table 1 accuracies are different from Table 2. For Table 2 it makes sense to demonstrate learning curves to emphasize the convergence speed. Metalearning algorithms are known to have high variance, so it makes sense to report error bars across multiple seeds.\\n\\nAll in all, the idea to improve the convergence is worth exploring and interesting to discuss, but the paper is a bit raw.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A first step towards gaining a better understanding of first-order meta-learning\", \"review\": \"This work presents an empirical study of the first-order meta-learning Reptile algorithm, in particular investigating a proposed regularization technique and deeper networks. Their regularization method is to train as usual for the first \\\\psi steps and subsequently only apply the training update to the learned initialization if the difference between that initialization before and after the task-specific update on the current task is greater than another hyperparameter \\\\gamma.\", \"they_show_experimentally_that_when_training_reptile_using_the_above_procedure_it_overfits_less\": [\"the gap between the training and testing accuracy is smaller (though not by a significant amount). What is perhaps more impressive is that they can obtain similar to state-of-the-art results on Omniglot and mini-ImageNet by applying 10x less updates than the corresponding state-of-the-art methods. Finally, they show that using deeper networks yields a benefit on Omniglot. This is an interesting observation, contradicting the intuition that when learning from little data larger networks would be more prone to overfitting.\", \"Some concerns / suggestions:\", \"I\\u2019m curious if a similar behaviour to the proposed regularization can be obtained simply by using a learning rate schedule, and / or using ADAM in the outer loop as well.\", \"The results in Table 2 are slightly lower than MAML and Reptile\\u2019s and it\\u2019s not clear how many additional iterations would be required to match those results. It may be that to squeeze out that last bit of performance a lot more updates are required even with the proposed method. It therefore seems more informative to keep running the proposed method until that performance is reached and then compare the number of iterations required. Alternatively, showing the curve of the performance on held-out data throughout training would address this point as well.\", \"Regarding the experiments with the deeper networks: the authors describe this as using deeper networks in the inner loop specifically. 
I found this confusing. Are the additional weights only \u201cfast weights\u201d (e.g. part of a task-specific classifier) that are not meta-learned (by the outer loop)? It would be useful to be more specific about this.\", \"It would be interesting to present deeper network experiments on mini-ImageNet instead of (or in addition to) Omniglot, since it\u2019s a more realistic and challenging benchmark with larger-resolution images.\", \"From what I understand, the proposed modifications are not applicable exclusively to first-order meta-learning. I would therefore be curious about whether applying these to second-order methods (e.g. full MAML) would yield similar conclusions.\", \"Overall, I feel that meta-training is still poorly understood, so I think empirical investigations like the one in this work are useful for gaining stronger intuitions for best practices in this setup. I therefore recommend acceptance of this work.\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
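Both reviews pin down the proposed rule precisely enough to sketch: train Reptile as usual for the first \psi meta-iterations, then apply the outer update only when the initialization moved by more than \gamma on the current task. Below is a minimal Python/PyTorch sketch under those assumptions — the task sampler, the loss interface, and the choice of a parameter-space L2 norm are illustrative, not the authors' code.

```python
import torch

def reptile_gated(model, sample_task, inner_steps, inner_lr, outer_lr,
                  meta_iters, psi, gamma):
    """Reptile outer loop with the gating rule described in the reviews:
    after the first `psi` meta-iterations, the meta-update is applied only
    when the initialization changed by more than `gamma` on the task."""
    for it in range(meta_iters):
        init = {k: p.detach().clone() for k, p in model.named_parameters()}
        loss_fn, data = sample_task()                 # assumed task sampler
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                  # task-specific adaptation
            opt.zero_grad()
            loss_fn(model, data).backward()
            opt.step()
        with torch.no_grad():
            delta = {k: p - init[k] for k, p in model.named_parameters()}
            moved = torch.sqrt(sum((d ** 2).sum() for d in delta.values()))
            apply_update = it < psi or moved > gamma  # the gate
            for k, p in model.named_parameters():
                step = outer_lr * delta[k] if apply_update else 0.0
                p.copy_(init[k] + step)               # Reptile meta-step (or skip)
    return model
```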
SJg2iEmldV
Augmented Memory Networks for Streaming-Based Active One-Shot Learning
[ "Anonymous" ]
One of the major challenges in training deep architectures for predictive tasks is the scarcity and cost of labeled training data. Active Learning (AL) is one way of addressing this challenge. In stream-based AL, observations are continuously made available to the learner, which has to decide whether to request a label or to make a prediction. The goal is to reduce the request rate while at the same time maximizing prediction performance. In previous research, reinforcement learning has been used for learning the AL request/prediction strategy. In our work, we propose to equip a reinforcement learning process with memory-augmented neural networks to enhance its one-shot capabilities. Moreover, we introduce Class Margin Sampling (CMS) as an extension of standard margin sampling to the reinforcement learning setting. This strategy aims to reduce training time and improve sample efficiency in the training process. We evaluate the proposed method on a classification task using the empirical accuracy of label predictions and the percentage of label requests. The results indicate that the proposed method, by making use of the memory-augmented networks and CMS in the training process, outperforms existing baselines.
[ "Active Learning", "Reinforcement Learning", "Few-Shot Learning" ]
Reject
https://openreview.net/pdf?id=SJg2iEmldV
https://openreview.net/forum?id=SJg2iEmldV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "Bkgxwt3McV", "r1lz6H9sFN", "rkgdSV6qY4", "r1lcClWKFN" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1555380568312, 1554912698083, 1554859072098, 1554743506146 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper33/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper33/AnonReviewer3" ], [ "ICLR.cc/2019/Workshop/LLD/Paper33/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Reject\", \"comment\": \"Reviewers appreciated the idea but found it difficult to understand the details. Most notably major experimental details such as the datased are not described.\"}", "{\"title\": \"Better presentations and more experiments are needed\", \"review\": \"This paper presents a method for the stream-based active learning problem, which is a combination of active learning and one-shot learning.\\nThe paper frame this problem as a reinforcement learning problem, and thus an AL-strategy (policy) is learned by an RL agent.\", \"the_main_contribution_of_the_proposed_model_is_two_fold\": \"equipping memory networks and a special sampling trick named Class Margin Sampling to deal with noisy initial samples.\\n\\nThe presentation of this paper is good. It would be very helpful to talk (with figures) about the overall workflow of the model at the beginning of Section 3 such that readers can have a big picture before they dive into the detailed solutions of each part.\\nA running example is also necessary to explain the whole stream-based AL problem and the solution proposed by the authors.\\n\\nThe two contributions are not particularly innovative, and the distinctions between the proposed work and (Pang et al. 2018) could be illustrated more.\\nThe experiments on other datasets and comparisons with more baseline methods (even non-stram-based AL methods) are necessary to evaluate the proposed method.\\n\\nI think this paper would become a strong paper after improving the presentations and experiments. But for now, I would say it is not that ready for publishing.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Clear and well-executed workshop paper\", \"review\": \"This paper proposes a memory-augmented deep learning model for streaming-based one-shot active learning. It introduces Class Margin Sampling (CMS) which leverages known class information in the margin sampling process to improve sample efficiency in the context of active learning.\\n\\n CMS is well-motivated, explained, and is shown to improve results over baselines. The experiments presented in Table 1 and their accompanying discussion provide a nice illustration of the work performed. \\n\\nThe paper demonstrates the use of active learning in three known architectures with and without CMS and provides a nice explanation of the models and why they were chosen.\\n\\nOverall, the authors did a great job motivating and summarizing work that seems like a good fit for this workshop. 
A longer version of the paper would benefit from additional discussion of the hyperparameters used in the RL agent and additional experiments.\", \"minor_suggestions\": [\"It might be helpful to mention the dataset used in Section 4 instead of just in the appendix.\", \"\\u201clearning policy that generalize over different dataset, by using a generic embedding layers that maps dataset-dependent features to embeddings.\\u201d - plurality doesn\\u2019t seem consistent here.\", \"\\u201cBy this particular design, all first-instance Q-values provide little but no information about the model, as we always want the model to execute a label request to maximize the expected reward\\u201d - wording\", \"\\u201cstructure than the other models, that turns in a behavior characterized by:\\u201d - wording\", \"\\u201cIn stream-based AL, observations are continuously made available to the learner that have to decide whether to request a label or to make a prediction.\\u201d -> \\u201cIn stream-based AL, observations are continuously made available to the learner that has to decide whether to request a label or to make a prediction.\\u201d\", \"\\u201cThe model learn, with few examples per class, to make labelling decision online\\u201d -> \\u201cThe model learns, with few examples per class, to make labelling decision online\\u201d\"], \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting idea, not reproducible experiments\", \"review\": [\"This paper proposes bringing Augmented Memory Networks to streaming-based active learning, and experiments with 3 existing approaches to representing memory. Moreover, the authors propose an extension of margin-based sampling, titled Class Margin Sampling (CMS).\", \"While the idea of the paper sounds interesting and useful, the paper, however, is very difficult to read and is not reproducible. While I understand that the 4-page constraint limits how ample the explanations can be, there are some crucial details missing that could have been added by simply citing other papers. I believe once this paper is rewritten it can make for an interesting contribution, but in its current form it requires a major revision.\", \"Here are, in my opinion, the main weaknesses, in order of importance:\", \"1. Experiments: the experimental section reports accuracies on a dataset that is never mentioned. What is the task, how many classes are you classifying into? What is the input space? There is no description nor citation for this.\", \"2. The paper is difficult to read, although I read it multiple times. Quite a few statements are not supported by explanations, which made it hard to follow the paper. For example:\", \"CMS is an extension to standard margin sampling, but standard margin sampling itself is never described or referenced. 
In order to understand why it needs improvement, we first have to understand how it works, even if it's a one-sentence description.\", \"\\\"adding an external more explicit memory-structure could be helpful in increasing the accuracy of the system, similar to (Santoro et al., 2016b)\\\" -- so how is this work different from Santoro et al., 2016b?\", \"why is \\\"usually C_cms = C \\u00d7 2\\\" ?\", \"\\\"This is because the first sample in every episode shouldn\\u2019t have considerable bias towards a specific class, which anyhow should be considered noise.\\\" -- why would it have such bias and why should it be considered noise anyhow?\", \"\\\"followed by a reset operation of both memory and hidden state\\\" -- no sharing of info between tasks/classes. Why?\", \"I don't understand problem 1 described on page 3.\", \"\\\"Our task structure is not compatible with a similar scheme, \\\" -- why?\", \"3. Editing and grammar issues:\", \"Missing full stops at the end of sentences: e.g., \\\"[...] increasing accuracy on one-shot predictions Given that the NTM [...]\\\", also at the end of the LSTM Baseline Model paragraph.\", \"Subject\\u2013verb agreement issues: \\\"all T samples from a class is used [...]\\\", \\\"Models that are capable of learning [...] is also of great interest\\\".\", \"\\\"Figure Figure 2a\\\"\"], \"rating\": \"1: Strong rejection\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
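For reference — since R2 notes that standard margin sampling is never defined in the paper — the textbook version queries the unlabeled samples whose top two predicted class probabilities are closest. A minimal sketch (the probability-matrix layout is an assumption):

```python
import numpy as np

def margin_sampling(probs, k):
    """Standard margin sampling: request labels for the k samples whose
    top-1 and top-2 class probabilities are closest, i.e. where the
    classifier is least decided. probs has shape (n_samples, n_classes)."""
    ordered = np.sort(probs, axis=1)           # ascending per row
    margins = ordered[:, -1] - ordered[:, -2]  # top-1 minus top-2 probability
    return np.argsort(margins)[:k]             # smallest margins first
```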
B1eos4meuV
Unsupervised Functional Dependency Discovery for Data Preparation
[ "Zhihan Guo", "Theodoros Rekatsinas" ]
We study the problem of functional dependency (FD) discovery to impose domain knowledge for downstream data preparation tasks. We introduce a framework in which learning functional dependencies corresponds to solving a sparse regression problem. We show that our methods can scale to large data instances with millions of tuples and hundreds of attributes, while recovering true FDs across a diverse array of synthetic datasets, even in the presence of noisy data. Overall, our methods show an average F1 improvement of 2× against state-of-the-art FD discovery methods. Our system also obtains better F1 in the downstream data repairing task than manually defined FDs.
[ "Functional Dependencies", "Sparse Regression", "Structure Learning", "L1-regularization", "Weak Supervision" ]
Accept
https://openreview.net/pdf?id=B1eos4meuV
https://openreview.net/forum?id=B1eos4meuV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "HkloEvkYtE", "SJgQWfJvKV", "S1eXfcYpOV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554736947418, 1554604538530, 1553992202705 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper32/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper32/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"interesting approach to functional dependency discovery with empirical support\", \"review\": [\"This paper introduces a structure learning framework for functional dependency (FD) discovery. The authors model the distribution of FDs over pairs of records by capturing dependencies of attribute-values in a graph structure.\", \"The authors compare their approach to manually-specified dependencies and automated state-of-the-art methods from the database and data mining community.\", \"I\\u2019ve included a list of strong points and points of confusion/questions below:\", \"Figure 1 showing autoregression matrix shows incrementally verifies hypotheses in structure learning approach.\", \"Empirical results support strong improvements over baselines.\", \"The authors describe the key result: \\u201cto model the distribution that FDs impose over pairs of records instead of the joint distribution over the attribute-values of the input dataset\\u201d. Could you further explore the theoretical/empirical improvement here? This would strengthen the motivation of the approach.\", \"What were the trade-offs considered in database/data mining communities-- what are the conceptual limitations from these communities that the authors were able to overcome?\", \"Additional datasets -- how does the method perform on other datasets with different error/data repairing characteristics (i.e. mentioned in the HoloClean paper, or others)?\", \"The paper is well-written, and could be strengthened additional exploration to strengthen choice and comparison to baselines.The paper w ould be a reasonable fit for the workshop, under the challenges introduced via: \\u201cRepresentations to enforce structured prior knowledge (e.g. invariances, logic constraints).\\u201d\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Elegant relaxation of functional dependency discovery as graph structure learning, validated by good empirical performance\", \"review\": \"Summary: The paper proposes a framework to relax the functional dependency (FD) discovery problem as a structure learning problem by focusing on the dependency between pairs of records. Graphical lasso is applied to obtain the sparse inverse covariance matrix, resulting in an approximate solution to the FD discovery problem. The proposed method is compared with prominent FD discovery methods such as PYRO and RFI, showing much higher precision and recall on a synthetic dataset. On a real dataset of data cleaning, the proposed method outperforms HoloClean, which uses manually written FDs.\", \"strengths\": \"1. The relaxation of the FD discovery problem to structure learning in elegant.\\n2. 
The empirical performance on synthetic and real datasets is impressive.\", \"weakness\": \"\", \"lack_of_theoretical_guarantee\": \"perhaps there's a way to get an error bound on B_hat from the error bound for Glasso?\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
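The relaxation both reviews describe — modeling the distribution that FDs impose over pairs of records and estimating a sparse inverse covariance with the graphical lasso — can be sketched as below. This is a hedged reconstruction, not the paper's implementation: the agree/disagree encoding of sampled record pairs and the thresholding of the precision matrix are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def candidate_dependencies(table, n_pairs=5000, alpha=0.05, seed=0):
    """table: (n_rows, n_attrs) array of categorical codes. Encodes random
    record pairs as per-attribute agreement indicators, fits a sparse
    inverse covariance (graphical lasso), and returns attribute pairs with
    nonzero partial correlation as candidate dependencies. Assumes every
    attribute both agrees and disagrees somewhere in the sampled pairs."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(table), n_pairs)
    j = rng.integers(0, len(table), n_pairs)
    agree = (table[i] == table[j]).astype(float)   # 1.0 where a pair agrees
    precision = GraphicalLasso(alpha=alpha).fit(agree).precision_
    n = precision.shape[0]
    return [(a, b) for a in range(n) for b in range(a + 1, n)
            if abs(precision[a, b]) > 1e-8]
```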
r1eqsNXgdV
Heuristics for Image Generation from Scene Graphs
[ "Subarna Tripathi", "Anahita Bhiwandiwalla", "Alexei Bastidas", "Hanlin Tang" ]
Generating realistic images from scene graphs requires neural networks to be able to reason about object relationships and compositionality. Learning a sufficiently rich representation to facilitate this reasoning is challenging due to dataset limitations. Synthetic scene graphs from COCO only have basic geometric relationships, and Visual Genome scene graphs are replete with missing relations or mislabeled nodes. Existing scene graph to image models have two stages: (1) a scene composition stage, and an (2) image generation stage. In this paper, we propose two methods to improve the intermediate representation of these stages. First, we use visual heuristics to augment relationships between pairs of objects. Second, we introduce a graph convolution-based network to generate a scene graph context representation that enriches the image generation. These contributions significantly improve the scene composition (relation score of 59.8% compared to 51.2%) and image generation (74% versus 64% in mean relation opinion score). Introspection shows that these heuristics are particularly effective in learning differentiated representations for scenes with multiple instances of the same object category. Obtaining accurate and complete scene graph annotations is costly, and our use of heuristics and prior structure to enhance intermediate representations allows our model to compensate for limited or incomplete data.
[ "scene graph", "heuristic supervision", "data augmentation", "image generation" ]
Accept
https://openreview.net/pdf?id=r1eqsNXgdV
https://openreview.net/forum?id=r1eqsNXgdV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "ByxVgaYFtN", "SJgQ2_I_YN", "SyeHIV-_tV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1554779372168, 1554700458683, 1554678860616 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper31/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper31/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\", \"comment\": \"The authors propose two contributions a) data augmentation techniques for scene graph to image generation as well as b) a new mechanism for the scene graph to image generation that maintains context using a GCNN.\", \"pros\": \"- The augmentation strategy makes sense and is reasonably illustrated \\n- Improve on highly relevant problem that is not well solved or easily evaluated as the authors mentioned\\n-The metareviewer and R1 appreciate the perceptual studies with amazon turk.\\n-The first contribution is well in keeping with the theme of this workshop.\", \"cons\": \"-As mentioned by R2 more details should really be included at least in the appendix. E.g. any major differences to the JJ pipeline such as size and form of the Graph CNN context rep. The authors also should cite the graphic copied from Johnson et al.\\n- No ablations to show the effect of the different contributions compared to JJ (it is not completely clear whether the gain comes from the augmentation, use of the context GCNN or from architecture/other changes to JJ pipeline).\\n-It should be made more clear if the scene augmentations are also used for the evaluation scene graphs and if so whether the JJ model also sees the same augmented scenes at evaluation. \\n\\nOverall this paper handles a very difficult and challenging problem and both contributions as well as the suggested evaluations are substantial.\"}", "{\"title\": \"Extensive evaluation of a data augmentation technique for image generation from scene graphs.\", \"review\": \"This paper deals with the problem of image generation from scene graphs, building on Johnson et al ( Image generation from scene graphs, 2018). There are three main contributions in the paper:\\n\\n1. a data augmentation scheme that employs heuristics to add fine-grained annotations of spatial relationships between pairs of objects in the scene, e.g., \\\"on top of\\\", \\\"left of\\\", \\\"behind\\\", etc. \\n\\n2. A graph neural network that adds context on top of object segmentation masks, to maintain information about the relationships between objects. \\n\\n3. A new evaluation metric that measures the compliance of the generated images to the (augmented) ground truth scene graph, as a fraction of the satisfied spatial relationships between objects in the ground truth. \\n\\nThe experiments are expensive, including even a perceptual study using amazon turkers, and the results show noticeably improved performance, compared to the baseline, both in terms of IOU and the new proposed metric (MORS). I have a question/remark though: when the authors describe the heuristics they used to augment the data, they claim that \\\" A is \\u2019in front of\\u2019 B if the bottom boundary of A\\u2019s bounding box is closer to the image\\u2019s bottom edge\\\". 
I don't think this is true in the case where the bounding box of B is fully contained in the bounding box of A, in which case B is in front of A.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Article uses \\\"depth ordering from observers viewpoint\\\" between objects as a heuristic to improve generation of realistic images from scene graphs by \\\"augmenting the scene graphs\\\". Article seems very vague on this very contribution.\", \"review\": \"The article claims to improve the scene-graph-to-image generation task by augmenting the scene graph dataset with object-ordering information. The article also claims to have contributed a new graph CNN: \\\"We use a scene graph context network to augment the representation for the generator as well as the discriminator\\\".\\n\\nBoth of these contributions are vague and incompletely defined.\\n\\nThough the article contains some experiments and a brief description of the ordering heuristic, with no mention of the Graph CNN, it is difficult to ascertain the contribution.\", \"rating\": \"1: Strong rejection\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
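R1's concern about the quoted 'in front of' heuristic is easy to state in code. A minimal sketch, assuming boxes are (x0, y0, x1, y1) with y growing toward the image bottom; the containment check is the reviewer's counterexample, not a rule from the paper:

```python
def in_front_of(box_a, box_b):
    """Heuristic as quoted by the reviewer: A is 'in front of' B if the
    bottom edge of A's box is closer to the image's bottom edge."""
    return box_a[3] > box_b[3]

def fully_contains(outer, inner):
    """The reviewer's counterexample case: if B's box lies fully inside
    A's, B is typically the object in front, so the heuristic above can
    assign the wrong ordering."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            outer[2] >= inner[2] and outer[3] >= inner[3])
```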
rkx5oNQeOV
Adaptive Cross-Modal Few-Shot Learning
[ "Chen Xing", "Negar Rostamzadeh", "Boris N. Oreshkin", "Pedro O. Pinheiro" ]
Metric-based meta-learning techniques have successfully been applied to few-shot classification problems. However, leveraging cross-modal information in a few-shot setting has yet to be explored. When the support from visual information is limited in few-shot image classification, semantic representations (learned from unsupervised text corpora) can provide strong prior knowledge and context to help learning. Based on this intuition, we design a model that is able to leverage visual and semantic features in the context of few-shot classification. We propose an adaptive mechanism that is able to effectively combine both modalities conditioned on categories. Through a series of experiments, we show that our method boosts the performance of metric-based approaches by effectively exploiting language structure. Using this extra modality, our model bypasses current unimodal state-of-the-art methods by a large margin on miniImageNet. The improvement in performance is particularly large when the number of shots is small.
[ "few-shot learning", "cross-modality" ]
Accept
https://openreview.net/pdf?id=rkx5oNQeOV
https://openreview.net/forum?id=rkx5oNQeOV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "Syl-3kafqN", "SkgQRkbtKN", "BJxpo7eOYV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555382184765, 1554743242719, 1554674596909 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper30/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper30/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Interesting idea, but not the right experiments to prove the point\", \"review\": \"This paper proposes an approach for performing multi-modal few shot learning. The main contribution is a new way of combining visual (images) and text features (word embeddings derived from text) which enables the use of existing meta-learning approaches. According to the authors, there are no other multi-modal approaches for few-shot classification.\", \"pros\": [\"the problem seems important and useful.\", \"the entire paper is described clearly.\"], \"cons\": [\"this paper is not really doing few-shot learning, because according to section 3.2. and the experiments, the authors use the test labels in order to know which word embeddings to assign to each sample: \\\"[...] containing label embeddings of all categories in D_train \\u222a D_test\\\". In other words, the authors use the labels (which are the goal of the classification task) to find the match between the two input modalities (to know what Glove vector to assign to each image).\", \"the experiments compare the results only between this multimodal approach and visual approaches. I believe using the Glove embeddings alone (no visual input) could give very good results on their own, and it is thus crucial for the authors to compare with this scenario too.\", \"the explanation for why you chose this form for lambda_c is unclear: \\\"A very structured semantic space is a good choice for conditioning.\\\"\"], \"overall_conclusion\": \"While the tackled problem is important and the paper is very well written, I believe the setting and the chosen dataset are artificial, because it requires looking at the test labels to create their inputs. If the authors had chosen a different dataset where text and images come naturally together (e.g., image captioning tasks), this would indeed be a good contribution. Moreover, I believe the experiments do not cover an important setting that the authors should have compared with (i.e. using only word embeddings as input), to prove that their method gains benefits from both modalities.\", \"rating\": \"2: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting new setup and model, but lacking important model details\", \"review\": \"Summary\\nThis paper proposes an approach for leveraging additional semantic information for solving the recently well-studied task of (visual) few-shot classification. \\n\\nThe authors demonstrate their particular approach based on the Prototypical Network model. This model is meta-learned via a sequence of tasks, with the goal in each being to correctly classify a set of \\u2018query\\u2019 images belonging to one of N classes after conditioning on a small handful of \\u2018support\\u2019 images from the same classes. Prototypical Networks\\u2019 mechanism of conditioning on the support set is to use it in order to create a prototype per class by averaging the corresponding support examples. 
Each query is then classified as the label of its nearest prototype. In this work, the prototype computation is modified to include an additional source of information: the word embedding of the corresponding class label. These two sources are combined via a convex combination with a learnable coefficient to decide the strength of each source. For the prototype of a particular class, this coefficient is defined as a sigmoid of the (transformed in a learnable way) word embedding of that class\\u2019 label.\\n\\nThey show experimentally that a particular variant of their proposed approach is able to surpass the (single modality) state-of-the-art on mini-ImageNet and tiered-ImageNet, with the greatest gains obtained in the 1-shot case. \\n\\nReview\", \"pros\": \"[+] The proposed problem is an appealing one to study, since in the non-low-shot scenario, analogous multi-modal approaches have shown to be beneficial. Further, the fact that the semantic information is obtained in an unsupervised fashion from word co-occurrences in text corpora makes this setup attractive as no additional labels are required.\\n[+] The positive experimental results indeed confirm that semantic embeddings offer useful complementary information and can aid in visual few-shot meta-learning.\", \"cons\": \"[-] While I understand that 4 pages is very little space, I found some important information missing pertaining to the models that are proposed and being compared here. In particular, I wasn\\u2019t sure what ProtoNets++ is (no citation or explanation is included). Further, it seems that they implemented this approach on top of both ProtoNets++ (yielding AM3-ProtoNets++) and TADAM (yielding AM3-TADAM). I assume that the model they describe is the former. While I am familiar with TADAM, it\\u2019s not obvious to me how exactly the semantic information is incorporated into that model. I feel it\\u2019s better to sacrifice some space on a short explanation of this and cut space from somewhere else instead.\\n[-] Another concern is regarding the potential leakage of information from the meta-test set into the meta-training phase through the word embeddings. Specifically, during the training of word embeddings on large corpora, it may be that the statistics of occurrence of words whose labels belong to the meta-test set of the visual task had influenced the shaping of the word embeddings of words whose labels are in the meta-training set. I understand that this might be hard to control, and I\\u2019m not sure how large of a leakage effect there would be, but it would be useful to comment on it.\\n\\nOverall, I feel this is an interesting problem, and this seems to be an interesting approach for addressing it, so I will recommend acceptance. In future experiments it would be interesting to address situations where not all visual concepts have associated word embeddings. I\\u2019m also curious if somehow episodically fine-tuning these word embeddings could yield additional performance gains.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
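The prototype modification R1 describes — a convex combination of the visual prototype and a transformed label embedding, with the mixing coefficient produced by a sigmoid over a learned transform of that embedding — can be sketched in PyTorch. The layer shapes and module names are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CrossModalPrototypes(nn.Module):
    """Prototype_c = lambda_c * visual_mean_c + (1 - lambda_c) * g(e_c),
    with lambda_c = sigmoid(h(e_c)), as described in the review."""
    def __init__(self, emb_dim, feat_dim):
        super().__init__()
        self.g = nn.Linear(emb_dim, feat_dim)  # label embedding -> feature space
        self.h = nn.Linear(emb_dim, 1)         # label embedding -> mixing logit

    def forward(self, support_feats, label_embs):
        # support_feats: (n_classes, k_shot, feat_dim); label_embs: (n_classes, emb_dim)
        visual_proto = support_feats.mean(dim=1)
        lam = torch.sigmoid(self.h(label_embs))          # (n_classes, 1)
        return lam * visual_proto + (1 - lam) * self.g(label_embs)
```

Queries would then be classified by their nearest prototype, exactly as in the unimodal Prototypical Network.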
S1ltj47xdE
Passage Ranking with Weak Supervision
[ "Peng Xu", "Xiaofei Ma", "Ramesh Nallapati", "Bing Xiang" ]
In this paper, we propose a \textit{weak supervision} framework for neural ranking tasks based on the data programming paradigm \citep{Ratner2016}, which enables us to leverage multiple weak supervision signals from different sources. Empirically, we consider two sources of weak supervision signals, unsupervised ranking functions and semantic feature similarities. We train a BERT-based passage-ranking model (which achieves new state-of-the-art performance on two benchmark datasets with full supervision) in our weak supervision framework. Without using ground-truth training labels, BERT-PR models outperform the BM25 baseline by a large margin on all three datasets and even beat the previous state-of-the-art results with full supervision on two of the datasets.
[ "Passage Ranking", "Weak Supervision", "BERT Models" ]
Accept
https://openreview.net/pdf?id=S1ltj47xdE
https://openreview.net/forum?id=S1ltj47xdE
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "SJx3e53z54", "HkxeVLCAYE" ], "note_type": [ "decision", "official_review" ], "note_created": [ 1555380723945, 1555125799728 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper29/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Good empirical contribution\", \"review\": \"The authors tackle the problem of passage ranking (i.e. given a query, rank the relevance of set of passages to this query), and propose using an interesting combination of two existing approaches: BERT, which has achieved state-of-the-art result on many similar NLP problems, and the weak supervision framework proposed by Ratner et al. (2016). The authors show that this combination obtains results that are better than the current fully supervised state-of-the-art.\\n\\nOverall, although the different components of this system are not novel, this work seems to have a good contribution as an application paper since the results look good, and the topic is also very relevant to this workshop. However, my most major concern is the comparison with other similar approaches (in terms of methods and results). Specifically, there seems to be a related paper that is not properly discussed, nor fully compared with in terms of results (see my comments below).\", \"strengths\": [\"the problem is very relevant to this workshop.\", \"the results look good.\", \"the explanations are generally clear and easy to follow.\"], \"major_weaknesses\": [\"it sounds from the authors' description that the work of Nogueira & Cho (2019) is very similar, and yet this paper doesn't discuss the similarities in enough detail. For instance \\\"Nogueira & Cho (2019) does not have an MLP module\\\" -- so what does it have instead? Also, do they also do weakly supervised training?\", \"why are the results of Nogueira & Cho (2019) not reported in the table (except for one number in the footnote)?\", \"the citation for the most similar work is incomplete, it only says \\\"Rodrigo Nogueira and Kyunghyun Cho. Passage Re-ranking with BERT. 2019.\\\", with no information where to find it.\", \"it's unclear from this paper whether there are other weakly supervised approaches on these datasets, other than the traditional ranking scores the authors used as baseline. If the aren't, that should be specified. If there are, they should be compared and reported in the table too.\"], \"minor_issues\": [\"authors refer to BM25 scores without ever explaining what they are (e.g. \\\"models trained on labels solely generated from BM25 scores\\\"), which can be an issue for anyone who hasn't specifically worked in information retrieval.\", \"there are a few grammatical mistakes.\", \"why did the authors chose the hidden state of the CLS token as the embedding that is used as input to the MLP?\", \"what is s_ij in the \\\"Supervised Training\\\" paragraph of section 2?\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SyxKiVmedV
Explanation-Based Attention for Semi-Supervised Deep Active Learning
[ "Denis Gudovskiy", "Alec Hodgkinson", "Takuya Yamaguchi", "Sotaro Tsukizawa" ]
We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting. The proposed attention mechanism is based on recent methods to visually explain predictions made by DNNs. We apply the proposed explanation-based attention to MNIST and SVHN classification. The conducted experiments show accuracy improvements for the original and class-imbalanced datasets with the same number of training examples and faster long-tail convergence compared to uncertainty-based methods.
[ "active learning", "attention", "explanation", "feature extraction" ]
Accept
https://openreview.net/pdf?id=SyxKiVmedV
https://openreview.net/forum?id=SyxKiVmedV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "SJl17qhfq4", "Hke2Xie-c4", "ryxkScFiYV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1555380759427, 1555266339683, 1554909750800 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper28/AnonReviewer2" ], [ "ICLR.cc/2019/Workshop/LLD/Paper28/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Review of \\\"Explanation-Based Attention for Semi-Supervised Deep Active Learning\\\"\", \"review\": \"The authors consider the setting of deep attention learning. It consists in selecting critical unlabelled data in a semi-labelled dataset, so that once labelled they can improve drastically the accuracy of the model. The approach of the authors consist in training a DNN that computes similarity between data, starting with a limited pool of labelled data points. To do so, they augment iteratively the dataset using a greedy approach.\\n\\nThe paper is well written, and even I am not not at all a specialist of the field I think I understood the main points of the paper.\\n\\nThe numerical experiments seems strong enough to be convinced by their approach.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"A small but focused contribution on active learning\", \"review\": \"This paper presents a novel method to match feature similarities for selecting unlabeled samples for active learning to train a model more label-efficiently.\\nThe major innovation is to utilize Explanation-Based Attention (EBA) mechanism to improve matching feature similarities, which has been proved effective to attribute feature importance in computer vision domains.\\nThe experiments show it outperforms conventional uncertainty-based approaches, especially when classes are imbalanced.\\n\\nOverall, this paper is a small but focused contribution on active learning and well-written, clear for readers to follow.\\nThe presentation can be improved with a more detailed description of notations (e,g. N_b and V_a are not explained, though it's easy to guess their meanings).\\nAn illustrative figures of workflow mentioned in the section \\\"summary for the proposed method\\\" would be a plus.\\nThe paper also enjoys the merit that it has a brief, clear overview of recent AL research to put itself in a broader context.\", \"my_major_concerns_are_two_fold\": \"1) the intuition of utilizing integrated gradients (IG) and pseudo-labels is not super clear to me; 2) experiments should be more extensive.\\nThe authors assume that the way of using IGs as EBA for evaluating sample similarity by multiplying themselves with descriptor matrices can upweight features that \\\"a) are not class and instance-level discriminative, b) spatially represent features for a plurality of objects in the input. \\\"\\nThe assumption needs more justification.\\nFor example, a) why to use average pooling function for both gradients and features is reasonable, b) the derivation of R_b is of what properties such that the distribution between training data and validation set are more similar iteratively (so we can believe the set of b-th iteration is better than the set of {b-1}-th). 
Experiments could also justify the assumption, e.g. with more visual explanations of why the proposed AL method is better and reasonable.\\n\\nFor the title, I suggest the authors not use \\\"explanation-based\\\", since it is a little bit misleading. Readers may expect the authors to use some kind of explanations to improve AL. I would say \\\"Integrated Gradients-based Attention for Deep Active Learning\\\" would be better.\\n\\nThat being said, I enjoyed reading this paper and would like to see it accepted with better presentation, more justification, and more experiments.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
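Since both reviews lean on integrated gradients (IG) without restating them: the standard attribution is (input − baseline) times the average gradient along the straight-line path from baseline to input. A minimal sketch — the zero baseline, the step count, and the pooling into a descriptor (attention assumed already at the feature-map resolution) are assumptions, not the paper's exact recipe:

```python
import torch

def integrated_gradients(model, x, target, steps=32):
    """Standard IG with a zero baseline: accumulate gradients of the target
    logit along the path from baseline to x, then scale by (x - baseline)."""
    baseline = torch.zeros_like(x)
    grads = torch.zeros_like(x)
    for a in torch.linspace(0.0, 1.0, steps):
        xi = (baseline + a * (x - baseline)).detach().requires_grad_(True)
        model(xi)[:, target].sum().backward()
        grads += xi.grad / steps
    return (x - baseline) * grads

def attention_descriptor(features, attention):
    """Review's recipe, loosely: multiply feature maps by the attention and
    average-pool spatially, giving a compact descriptor for similarity
    matching. Both tensors assumed shaped (batch, channels, H, W)."""
    return (features * attention).mean(dim=(-2, -1))
```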
B1gKiN7luV
Split Batch Normalization: Improving Semi-Supervised Learning under Domain Shift
[ "Michał Zając", "Konrad Zolna", "Stanisław Jastrzębski" ]
Recent work has shown that using unlabeled data in semi-supervised learning is not always beneficial and can even hurt generalization, especially when there is a class mismatch between the unlabeled and labeled examples. We investigate this phenomenon for image classification and various other forms of domain shift (e.g. salt-and-pepper noise). Our main contribution is showing how batch-normalized neural networks can benefit from additional unlabeled data that comes from a shifted distribution. We achieve it by simply using separate batch normalization statistics for unlabeled examples. Due to its simplicity, we recommend it as a standard practice.
[ "semi-supervised learning", "domain shift", "image classification", "deep neural networks" ]
Accept
https://openreview.net/pdf?id=B1gKiN7luV
https://openreview.net/forum?id=B1gKiN7luV
ICLR.cc/2019/Workshop/LLD
2019
{ "note_id": [ "HkxRbBv0YV", "ryxuvOt6YN", "S1e-YHZ2tE", "r1xyFC0tYE" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1555096838398, 1555040352133, 1554941305000, 1554800246551 ], "note_signatures": [ [ "ICLR.cc/2019/Workshop/LLD/Program_Chairs" ], [ "ICLR.cc/2019/Workshop/LLD/Paper27/AnonReviewer3" ], [ "ICLR.cc/2019/Workshop/LLD/Paper27/AnonReviewer1" ], [ "ICLR.cc/2019/Workshop/LLD/Paper27/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Acceptance Decision\", \"decision\": \"Accept\"}", "{\"title\": \"Hypothesis clearly defined and tested\", \"review\": \"This paper improves performance in two modern semi-supervised learning (SSL) models which utilize batch-norm in cases where the models train on unlabeled data from a different distribution than labeled data. Their simple technique consists of calculating separate statistics for the unlabeled and labeled data in batch-norm.\\n\\nThe paper, which appears to be largely motivated by Section 4.4 of Oliver et al, flushes out the class mismatch problem presented in the aforementioned paper and also tests performance under domain shift. The choices for domain shift perturbations seem reasonable, if not totally realistic. \\n\\nAlthough the paper clearly demonstrates improved performance in models with batch-norm, I think that the discussion presented in Section 3.3 warrants additional investigation in future work. \\n\\nAll in all, the paper's hypothesis was clearly defined and tested with thorough experiments and explanation. In addition, the problem definition and proposed solution, though fairly narrow, fits neatly into the workshop format.\", \"editing_comment\": [\"\\u201cwhich we select 20 randomly classes as the supervised dataset.\\u201d\"], \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Simple idea which makes sense and works well in practice\", \"review\": \"Summary:\\n\\nThe authors argue that distribution shift can be detrimental when doing semi-supervised learning. As a simple fix, they propose to not share batchnorm statistics between labeled and unlabeled data. They show consistent improvement for the case where little unlabeled data contains examples for the classes of which labels are present.\", \"novelty\": \"The idea to have separate batchnorm parameters seems natural and also seems to work for the problem described here. However, conditional batchnorm is a well-known technique in general so the overall novelty is limited.\", \"rating\": \"3: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An worthwhile observation that has its place in the workshop\", \"review\": \"This work proposes an approach to tackle the domain adaptation problem in semi-supervised learning, based on a decoupling of the computation of the batch statistics in the batch normalization layers.\\n\\nIn a setting where the unlabeled data does not follow the supervised data distribution, semi-supervised learning techniques can lead to a degradation of performance with respect to a purely supervised setting. 
In this work, it is shown that computing the batch normalization statistics separately for the unsupervised and for the supervised data can alleviate the domain shift and lead to improved semi-supervision.\", \"i_have_a_few_questions_and_remarks\": \"a) The introduction mentions the problem of the domain shift for the unlabeled data. I would add that it is unclear how one could benefit from unlabeled samples in the general case if those samples are completely out-of-domain: after all, the core idea of semi-supervised learning is to grasp a better prior on the data domain. I can see that the network can still learn information, e.g. when the inputs share the same modality (RGB data) or have an overlap of classes. Overall, I would make it clearer in the introduction what one expects from semi-supervised learning in an out-of-domain setting. One point is that semi-supervision should not degrade performance w.r.t. a purely supervised setting, which can happen with current semi-supervised algorithms.\\n\\nb) I would also experiment with random noise unrelated to the supervised data distribution to see the limits of the approach, and study a case of extreme domain mismatch. In such a setting, one would hope to match the purely supervised baseline performance. I expect a batch-norm adaptation to be insufficient for this.\\n\\nc) I assume that at test time, the batch norm statistics computed on the supervised set are used; I would make this clear in the document.\\n\\nI think that adapting batch norm is sufficient in the experiments done, but probably not a universal remedy to domain shift in semi-supervised learning, which could be shown with extreme distribution shifts. In general, extra experiments could also show a more progressive evaluation of different shifts, ranging from same-domain unsupervised data to fully out-of-domain unsupervised data.\\n\\nI think however that this idea raises some valid points and introduces an easy fix that can be enough in some cases; moreover, split-BN can stimulate new ideas related to domain shift and out-of-domain unsupervised learning. Therefore I believe this paper has its place in the workshop.\", \"rating\": \"4: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is fairly confident that the evaluation is correct\"}" ] }
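The mechanism the abstract and reviews describe — separate batch-norm statistics for labeled and unlabeled batches, with the labeled statistics used at test time (the reviewer's point (c)) — is a few lines in PyTorch. Whether affine parameters are shared between the two branches is not specified in the abstract; this sketch keeps them separate:

```python
import torch.nn as nn

class SplitBatchNorm2d(nn.Module):
    """Separate BatchNorm statistics for labeled and unlabeled batches, as
    the abstract describes. The labeled-branch statistics are used at test
    time; the sharing of affine parameters is an open design choice, kept
    unshared here for simplicity."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_labeled = nn.BatchNorm2d(num_features)
        self.bn_unlabeled = nn.BatchNorm2d(num_features)

    def forward(self, x, unlabeled=False):
        if self.training and unlabeled:
            return self.bn_unlabeled(x)  # unlabeled stats never pollute eval
        return self.bn_labeled(x)        # labeled stats; also used at test time
```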